I’m in a similar situation. I’m even an AI proponent; I think it’s a great tool when used properly. I’ve had great success solving basically trivial problems with small scripts. Code review is helpful. Code completion is helpful. It makes me faster, but you have to know when and how to leverage it.
Even on tasks it isn’t good at, it often helps me frame my own thoughts. It can identify issues better than it can fix them. So if I say, “Here is the current architecture; what is the best way to implement <feature>, and why?”, it will give me a plan. It may not be a great plan, but as it explains it, I can easily identify the stuff it has wrong. Sometimes it’s close to a workable plan. Sometimes it’s not. And other times it will confidently lead you down a rabbit hole. That’s the real time waster.
“Why won’t the context load for this unit test?”
You’re missing this annotation.
“Yeah, that didn’t do it. What else?”
You need this plugin.
“Yeah it’s already there.”
You need this other annotation.
“Okay that got a different error message.”
You need another annotation.
“That didn’t work either. You don’t actually know what the problem is do you?”
Sad computer beeps.
Just taking the output and running with it is inviting disaster. It’ll bite you every time, and the harder the code, the worse it performs.
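For anyone who hasn’t lived this particular loop: the exchange above reads like a Spring/JUnit session, where “the context” is the Spring application context the test tries to boot. Here’s a minimal sketch of the kind of test the model keeps guessing at, assuming Spring Boot with JUnit 5 (MyService is a made-up bean, purely for illustration):

```java
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

import static org.junit.jupiter.api.Assertions.assertNotNull;

// Guess #1, "you're missing this annotation": @SpringBootTest does tell
// the test to boot the full application context.
@SpringBootTest
class MyServiceTest {

    @Autowired
    private MyService myService; // made-up bean; stands in for whatever fails to wire

    // If the context genuinely won't load, the failure happens before this
    // method ever runs. The fix lives in configuration the model can't see
    // (a missing bean definition, a bad property, a dependency conflict),
    // not in yet another annotation on the test class.
    @Test
    void contextLoads() {
        assertNotNull(myService);
    }
}
```

Each of the model’s guesses is a plausible fix for some context failure, just not necessarily yours. The actual stack trace usually names the bean or property at fault, and that’s the one thing it never asked to see.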
This has been my experience as well, except that the company I work for has mandated that we use AI tools every day (regardless of whether we want or need them) and is actively tracking our usage to make sure we comply.
My productivity has plummeted. The tool we use (Cursor) requires so much hand-holding that it’s like having a student dev with me at all times… only a real student would actually absorb information and learn over time, unlike this glorified Markov Chain. If I had a human junior dev, they could be a productive and semi-competent coder in 6 months. But 6 months from now, the LLM is still going to be making all of the same mistakes it is now.
It’s gotten to the point where I ask the LLM to solve a problem for me just so that I can hit the required usage metrics, but completely ignore its output. And it makes me die a little bit inside every time I consider how much water/energy I’m wasting for literally zero benefit.
That sounds horrific. Maybe you can ask the AI to write a plugin that automatically invokes the AI in the background and throws away the result.
We are strongly encouraged to use the tools, and Copilot review is automatic, but that’s it. I’m actually about to accept a leadership position at another AI-heavy company, and hopefully I can leverage that position to guide a sensible AI policy.
But at the heart of it, I need curious minds that want to learn. Give me those and I can build a strong team with or without AI. Without them, all the AI in the world won’t help.