AI Coding Tools - The developer's perspective

  • Writer: Anna Kucherenko
  • Aug 27
  • 4 min read


These days, dozens, if not hundreds, of AI coding assistants are available, all designed to make developers’ lives easier and speed up the process of building software systems.

The reactions to these tools are mixed. Some developers see them as powerful companions, capable of handling almost any well-defined task. Others remain more skeptical, believing we are still far from a world dominated by machines.


To explore this further, we at Netminds decided to launch a series of interviews with our senior engineers. They’ll share their experiences with different tools and the real outcomes these tools can deliver - valuable insights for developers who haven’t tried them yet.


This article is the first in the series. Stay tuned for more insights and lessons learned.



It's like a Junior Developer reporting to you.


I have been working with Copilot in Visual Studio (both Code and classic), and little by little it has become an integral part of my work routine. Here are some of my takeaways from practical use: I've used it for .NET projects written in C# and for Terraform.


Agent Mode

An invaluable feature that speeds up refactoring to light speed. Other great examples of use are prototyping something or doing a mundane task like adding fields to the database and models across projects. But beware: the agent uses context from your solution, so if you have a lot of legacy or "smelly" code, be ready for sub-optimal approaches in its suggestions. In such situations, you should be very specific with context hints so it knows what the code should look like.
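
To make the "adding fields across projects" case concrete, here is a minimal sketch of the kind of change a single agent-mode prompt can apply solution-wide. The Customer entity, the DTO, and the prompt wording are hypothetical, not taken from a real project.

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; } = string.Empty;

    // The new field. One prompt like "add a nullable LoyaltyTier to
    // Customer and propagate it to the DTO and mappings" is usually
    // enough for the agent to mirror it across projects.
    public string? LoyaltyTier { get; set; }
}

// The matching DTO in another project that the agent would update.
public record CustomerDto(int Id, string Name, string? LoyaltyTier);

In a real solution the agent would also touch the EF configuration, migrations, and mapping code, which is exactly where the context hints mentioned above pay off.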


System Tests

Copilot is usually a great tool for adding test cases to existing tests, but if you ask it to create tests from scratch, the results may be sub-optimal, and you might spend as much time fixing them as it would have taken to write them in the first place. So think twice here.
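
As an illustration of that sweet spot, here is a hedged sketch with xUnit; the calculator and the values are made up for the example. The last two [InlineData] rows are the kind of boundary cases Copilot typically proposes when asked to extend an existing test.

using Xunit;

public class DiscountCalculatorTests
{
    [Theory]
    [InlineData(100.0, 0.10, 90.0)]  // existing case
    [InlineData(50.0, 0.00, 50.0)]   // existing case
    [InlineData(0.0, 0.10, 0.0)]     // suggested: zero-price boundary
    [InlineData(100.0, 1.00, 0.0)]   // suggested: full-discount boundary
    public void ApplyDiscount_ReturnsExpectedTotal(double price, double rate, double expected)
    {
        var actual = price * (1 - rate);
        Assert.Equal(expected, actual, precision: 2);
    }
}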


Different Models

While experimenting with different models (Gemini, Claude, GPT), I found that Claude produces the most accurate results in both Ask and Agent modes, but I encourage you to experiment with your own tasks; maybe another one is better for your use cases. It's really a matter of trial and error.


Incorrect Responses

During my time with Copilot, I've encountered invalid suggestions and responses. Usually this happens when you're doing research in a field with a lot of ambiguous terminology, or with software that has many versions, where different documentation versions just got mashed together in Copilot's brain :). To resolve this, once again, try to be very specific in your prompts. Still, in some cases it is much more productive to visit the documentation page of the library or software you're trying to use.


Overall Feel

Using Copilot in agent mode feels a bit like having a personal Junior Developer next to you. You can delegate some tasks to it; sometimes the results are bad, but overall it really helps offload you and lets you focus on more complex tasks.


Mykola Piatkovskyi, Senior .NET Engineer at Netminds 



AI assistants are overrated.


I can't deny the usefulness of AI assistants, but from my experience, they are still not mature.


I've been actively testing the Windsurf coding assistant for the last three months, and it's really great for automating simple routines or handling straightforward tasks. But once you use it for slightly more advanced scenarios, it tends to approximate and guess without context.


Among the obvious benefits, I'd mention that it's great to have a chat embedded in your IDE where you can ask questions and get aggregated knowledge from the internet in a single reply. It's also quite good at adding comments and documentation, suggesting variable names, analyzing JSON files, building models from input structures such as DB tables or JSON documents (see the sketch below), writing simple unit tests, and recognizing the pattern of what you're doing and offering to continue in the same way.
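
For the "build a model from an input structure" case, the result might look like the following sketch, using System.Text.Json; the Order document and its property names are hypothetical.

using System;
using System.Collections.Generic;
using System.Text.Json;
using System.Text.Json.Serialization;

// Given a JSON document like:
//   { "orderId": 42, "placedAt": "2025-08-01T10:00:00Z", "items": ["a", "b"] }
// the assistant reliably generates a matching model:
public class Order
{
    [JsonPropertyName("orderId")]
    public int OrderId { get; set; }

    [JsonPropertyName("placedAt")]
    public DateTimeOffset PlacedAt { get; set; }

    [JsonPropertyName("items")]
    public List<string> Items { get; set; } = new();
}

// Usage: var order = JsonSerializer.Deserialize<Order>(json);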


But back to my initial claim: why are they overrated? Once you try slightly more complicated scenarios, it fails most of the time. If I ask it to refactor some business logic, it will make the code look nice, but the code often won't work anymore. Windsurf advertises that it analyzes the project for better context, yet when it tries to suggest how to finish what you started, it invents fields that already exist (just named slightly differently) or relies on things that have never been there. So most of the time, even if a suggestion looks legit, once you accept it you usually have to adjust and fix it to make it work.
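
A small, made-up illustration of that failure mode: the model below contains the real fields, and the commented-out suggestion is the kind of near-miss an assistant produces.

using System;

// The real model, with the fields as they actually exist.
public class Invoice
{
    public DateTime CreatedAt { get; set; }
    public decimal TotalAmount { get; set; }
}

// A typical suggestion that looks legit but won't compile, because the
// assistant renamed existing members:
//
//   if (invoice.CreationDate > cutoff && invoice.Total > 0) { ... }
//
// CreationDate and Total don't exist (CreatedAt and TotalAmount do), so
// you end up fixing the "finished" code by hand.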


Fun fact: as someone using the ReSharper extension for Visual Studio together with the Windsurf extension, I've noticed that they constantly fight over who gets to make suggestions, often conflicting with each other and making both suggestions unusable.


Yaroslav Pohlod, Senior .NET Engineer, Team Lead at Netminds 



As of Q3 2025, artificial intelligence is steadily making its way into the workflows of companies across every industry. In IT, its role for now remains largely supportive: AI can lighten software developers' workloads, accelerate routine tasks, and offer fresh insights, but it requires clear guidance and careful oversight to deliver truly high-quality results. Six to twelve months from now, though, the picture may look very different.


What do you think? Has your experience with AI assistants been similar or completely different? Share your thoughts in the comments below.

 
 
 
