0% AI, 80% YAML Jockey, 10% SSH Shenanigans, 10% Python programming
Been doing sysadmin since the '90s. Why bother with AI? It just slows me down. I've already scripted my life with automation. Anything not already automated probably takes me a few minutes, and if it takes longer, I'll build the automation. Shell scripts and Ansible aren't hard.
What if you took those scripts you've used to automate your life, dumped them into something like Cursor, and asked the model to refine them: make them better, improve the output, add interactivity, generalize them, harden them, etc.?
Sometimes when I have time to play around I just ask models what stinks in my code or how it could be better with respect to X. It's not always right or productive but it is fun, you should try it!
> add interactivity
just what I want, interactivity in my ansible playbook
> It's not always right or productive but it is fun, you should try it!
yay, introducing bugs for literally no reason!
I spent the entirety of yesterday, from around 8:30 until almost exactly 5pm, doing a relatively straightforward refactor to change the types of every identifier in our system, from the protobuf to the database, from a generic UUID type to a distinct type-safe wrapper around UUID for each one. This is so that passing IDs to functions expecting identifiers for one particular type vs. another is less error-prone.
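For anyone who hasn't seen the pattern: it's the classic newtype trick. A minimal sketch of what I mean, with made-up names and assuming the `uuid` crate (not our actual code):

```rust
use uuid::Uuid;

// One distinct wrapper type per kind of identifier. Deriving Copy/Eq/Hash
// keeps the wrappers roughly as ergonomic as the raw Uuid was.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct UserId(pub Uuid);

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct OrderId(pub Uuid);

fn cancel_order(id: OrderId) {
    let _ = id; // ...
}

fn example(user_id: UserId, order_id: OrderId) {
    let _ = user_id;
    cancel_order(order_id);   // fine
    // cancel_order(user_id); // compile error: expected `OrderId`, found `UserId`
}
```

The payoff is that the compiler now catches the swapped-ID class of bug for free.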
It was a nonstop game of my IDE’s refactoring features, a bunch of `xargs perl -pi -e 's/foo/bar/;'`, and repeatedly running `cargo check` and `cargo clippy --fix` until it all compiled. It was a 4000+ line change in the end (net 700 lines removed), and it took me all of that 8.5 hours to finish.
Could an AI have done it faster? Who knows. I’ve tried using Cursor with Claude on stuff like this and it tends to take a very long time, makes mistakes, and ends up digging itself further into holes until I clean up after it. With the size of the code base and the long compile times I’m not sure it would have been able to do it.
So yeah, a typical day is basically 70% coding, 20% meetings, and 10% Slack communication. I use AI only to bounce ideas off of, as it seems to do a piss-poor job of maintenance work on a codebase. (I rarely get to write the sort of greenfield code that AI is normally better at.)
I skip reading anything "written" by my analyst and get on with the work.
If I have a question I can just ask ChatGPT, Perplexity, and Gemini.
> If I have a question I can just ask ChatGPT, Perplexity, and Gemini
Which get their knowledge (training data) on relevant topics from analysts. Analysts who increasingly use ChatGPT and the rest to produce their write-ups.
Enough loops of this, and analyst writings and ChatGPT responses on market analysis will soon reach "useless bullshit" parity.
Indeed. I get the vibe that the submission author doesn't really check the results of LLM output either. Which of course speeds this process up.
I hope you come back 2 years from now and let us know if you still have a job and how your wage has been developing. The way you describe your workflow does not look promising.
It sounds like you will be forever limited by the AI, and easy to replace.
This should be concerning.
I spend my time like this:
- reading papers, blogs, and articles, searching Google Scholar, and chatting with Perplexity about them to help find other papers
- writing research proposals based on my reading and previous research
- analysing data; lately this means asking Claude Code to generate a notebook for playing around with the data
- writing code to explore some data or make some model of the data; this also involves a lot of Claude Code interaction these days
- meetings, Slack, email
- doing paper and proposal reviews, which includes any or all of the above tasks plus writing my opinion in whatever format is expected
- travelling somewhere to meet colleagues at a conference or their workplace for some collaboration, which includes any or all of the above plus giving talks
- organising events that bring people together to do any or all of the above
I’m a soft-money research scientist with a part-time position in industry, working as a consultant.
If I were your boss I would fire you.
It is pure comedy that someone has this job over another person who could actually perform the work, and that they felt the need to post this.
I’m not an analyst but a product owner and developer. Sometimes I feel similarly. However, I notice a very interesting thing when I turn the AI off for a day and go back to the pre-copilot days: my work is more focused, concise, comprehensible, impactful, and memorable.
Still I have this feeling that AI is very close to “doing my work” but yet when I step back I see it may be a rather seductive mirage.
Very unclear. Hard to see with the silicon-colored glasses on.
I use AI almost exclusively for search, and usually force myself to grind against a problem a little before engaging it. When I do use it for software development, I treat AI as a smart codemod tool: for easily verifiable, well-defined tasks that take low mental effort but a high time commitment.
I keep a list of "rules of engagement" with AI that I try to follow so it doesn't rob me of cognitive engagement with tasks.
As a developer, it's a lot of docs and code reading. And writing reports on tickets. Sometimes some deep planning. Writing code is the guilty pleasure of the day.
Mine is probably 60% AI, 25% structured research, 10% copy-paste, and 5% staring at the screen until inspiration strikes. AI helps me brainstorm and speed up repetitive tasks, but I always double-check and refine everything manually. The “panic” part is real though — especially when deadlines creep up faster than expected. Curious to see how others balance between AI assistance and good old-fashioned problem-solving.
10-20% AI. Typically for boring and repetitive coding tasks. Also for some quick research, bouncing ideas around, or finding different ways to do certain things.
The rest is actual coding (where using AI typically slows me down), design, documentation, handling production incidents, monitoring, etc.
90% Bazel, 5% AI, 5% day dreaming.
Not sure if the Bazel or AI part is worse. :-D I think Bazel.
50% existential crisis, 50% meetings, 50% work a grad should be doing.
I've just taken a week off to help extended family with a project, and it's reminded me what a good job is.
Where's the part where you verify what AI has given?
When we write programs that "learn", it turns out that we do and they don't. — Alan Perlis
So what do you learn?
My flow (with legacy software) is: manual strip > LLM > manual clean up > repeat
I really hope you don't just get your data from ChatGPT
Mine is doing the work… o_O
Same.
People these days do everything to avoid actually programming, but still wanna call themselves programmers.
Programming is much more than typing code on a keyboard.
80% AI, 10% guidance, 10% angry guidance, and 1% panic.
Ask HN: will my boss prompt AI himself?
As an analyst, it is your job to prepare valuable information for others. If you drop unreviewed, uncorrected, and outdated AI-generated material on people, you will lose your job. I am starting to reject meeting minutes created by AI that the writer doesn't understand and hasn't polished.
AI is a tool for you to create better results, not an opportunity to offload thinking onto others (as is now done so often).
You're correct, but I'm sure his boss feels differently. In my experience businesses really care very little for precision and excellence in your work product. I don't just mean they cannot recognize excellence (although this is true as well) but that they actively dislike precision and would lump a lot of that sort of effort all into one bucket they might describe as "being too pedantic," or "not seeing the bigger picture," or some other epithet which _could_ be true, but in practice just means "I just want stuff done quickly and don't care if it's half-assed. We just need to report success to management which will never know or care about the truth."
This. Most bosses are so obsessed with applying Pareto's 80/20 rule in all situations (even when it does not apply) that most would trade accuracy for velocity without thinking. Frankly, I doubt the average manager would recognize wrong data when confronted with it.
That's precisely it.
Previously, the output of office work was always tightly associated with the accountability that output implies. Since the output is visible and measurable but the accountability isn't, once it became possible to generate plausible-looking professional output, most people assumed that's all there is to the work.
We're about to discover that an LLM can't be chastened or fired for negligence.
I didn't think LLM sycophancy would work on me.
And yet, I've realized that a few research and brainstorming sessions with LLMs that I thought were really good and insightful were just the LLM playing "yes, and" improv with me, reinforcing my beliefs regardless of whether I was right or wrong.
My workflow is probably 20% coding, 50% thinking about how to code whatever needs fixing, 10% looking at metrics and 20% getting distracted. AI has proven almost entirely useless for the type of disentangling spaghetti code that makes up most of my work. Then again I'm not an analyst so you do you.
Same boat, and if anything AI has made the "disentangling spaghetti code" part even more annoying. In the past, badly designed code at least had the decency to also look bad at first glance.
In any given week I spend 50-60% of my time in meetings. Half of that time is listening to PMs madlib ideas for how AI is going to do everything for us, and the other half is spent listening to junior developers and analysts make excuses for why they haven't gotten anything done in the last week, despite using AI to try to get their jobs done. Across 5 projects employing 15 people, I am the only senior developer and have as much experience as everyone else combined.
I spend 20-30% of my week on administrative paperwork. Making sure people are taking their required trainings. Didn't we just do the cyber security ones? Yes, we did, but IT got hacked and lost all the records showing that we did, so we need to do it again.
I spend 10-20% of my week trying to write documentation that Security tells me is absolutely required, but I have never gotten any answer from them on whether they are going to approve any of my applications for deployment. In the last 2 years, I've gotten ONE application deployed, and I had to weaponize my org chart to make it happen.
That leaves me about -10 to 20% of the week to get the vast majority of the programming done on our projects. Which I do. If you look at the git log, my name dominates.
I don't use AI to write code because I don't have time to dick around with bad results.
I don't use AI to write any of my documentation or memos. People generally praise my communication skills for being easy to read. I certainly don't have time to edit AI's shitty writing.
The only time I use AI is when someone from corporate asks me to "generate an AI-first strategy for blah blah blah". I think it's a garbage initiative, so I give them garbage work. It seems to make them happy, and then they go away and I go back to writing all the code by hand. Even then, I don't copy-paste the response; I type it out longhand while reading it, just in case anyone asks me any questions later. Despite everyone telling me "typing speed isn't important to a software developer," I type around 100 WPM, so it doesn't take too long. Not blazing fast, but a lot faster than every other developer I know.
So, forgive me if I don't have a lot of sympathy for you. You sound like half the people in my company, claiming AI makes them more productive, yet I can't see anywhere in any hard artifacts where that productivity has occurred.
About 50% AI, 30% synthesizing what I found, 10% copy-paste, 10% manual research.
be prepared to be laid off
You should add "Ask HN:" to the title.
Gemini would have told them that...
1% AI for productive work. My work is training, developing training, experimental testing and writing experimental test tools that become part of my training. The training is all about software testing.
I find that it’s easier to write code than to write English statements describing code I want written.
I can’t phone this work in. It has to be creative and also precise.
I know of no way to design useful training experiences using AI. It just comes out as slop.
When I am coding, I use Warp. It often suggests bug fixes, and I do find that these are worth accepting, generally speaking.
I generally come in at least fifteen minutes late after that I sorta space out for an hour. I just stare at my desk, but it looks like I'm working. I do that for probably another hour after lunch too, I'd say in a given week I probably only do about fifteen minutes of real, actual, work.
What if -- and this is a hypothetical -- you were offered some kind of "stock option" or "equity share" scheme?
Good luck with your layoffs. I hope your firings go really well.
Office Space quote?
You’re the kind of go getter that has upper management written all over you.
Um, I'm gonna need you to go ahead and come in tomorrow. So if you could be here around 9:00, that would be great. Mm-Kay
Isn’t this basically utopia come? Why the doom and gloom? You’re getting paid to do practically nothing.
I never understood how George Jetson has so much friction with his boss when all he has to do is press the button.
This is a quote from “Office Space” by Mike Judge, of “King of the Hill” and “Silicon Valley” fame. It is a great movie; you should check it out!
> Mike Judge of “King of the Hill” and “Silicon Valley”
And, more importantly, Beavis and Butthead.
So your work can be automated by AI. Don't tell your boss, or you're fired.
OP listed N tools they stitch together in a creative and thoughtful way (“30% brainstorming”) which happen to leverage AI.
What’s the fireable offense? Does the boss want to stitch those tools together themselves?
If the output is crap, regardless of the tool, that’s a different story, and one we don’t have enough info to evaluate.
There doesn't have to be an offence; it's cost reduction for the company.
It depends how mission critical his brainstorming is for the company. LLMs can brainstorm too.
My latest take is: AI amplifies human intent. For now at least, it very much needs someone with vision to guide and leverage it, and this can easily be a full time job.
That means OP’s job may be _safer_, because they are getting higher leverage on their time.
It’s their colleague who’s ignoring AI that I see as higher risk.