AI Coding Tools - The developer's perspective
- Anna Kucherenko
- Oct 22
- 4 min read

In the next part of our AI-in-development series, Dmytro Kravchyna, Lead Software Developer at Netminds, shares his hands-on experience using AI tools in real-world software projects, from research and planning to coding, testing, and automation. This article wraps up the series with a practical look at where AI truly shines in daily engineering workflows, and where human expertise still makes all the difference.
I actively use AI on my current project, for both planning upcoming work and hands-on coding. Below are the concrete ways it helps, where it shines, and the limits I've run into.
1) Planning & technical due diligence (ChatGPT “Deep Research”)
When I need a quick but thorough briefing (pros/cons of an approach, rough cost ranges, alternatives, comparison tables, and synthesized summaries), I reach for ChatGPT Deep Research. It runs multi-step web research, reads sources, and returns a cited report you can verify. It's designed to browse, analyze, and synthesize many sources and produce a documented write-up, and it now integrates with agent mode, which can drive a visual browser when deeper digging is needed. (OpenAI)
Where it fits: preparing for tasks (spikes, option analysis).
Where it doesn't: it’s not a substitute for formal market research or a professional analyst's report—you still need domain expertise to validate results and assumptions. (OpenAI itself notes limitations and the need to verify outputs.) (OpenAI)
2) Coding with GitHub Copilot (commercial license)
Over time, I've converged on three distinct usage patterns.
A) Prompt-to-Code with Agent Mode
Copilot's Agent Mode can plan, edit multiple files, run tools/terminal commands, watch build and test output, and iterate until the task completes—directly inside VS Code or Visual Studio. In practice, it's great for refactors, scaffolding endpoints/services, or stitching together cross-layer changes. You stay in control and approve edits. (Visual Studio Code, Microsoft Learn)
Clear, detailed prompts matter. Expect to spend time on context (paths, naming, error handling, logging, attributes). The upside is reusability: once you craft a good prompt, you can reuse it across similar tasks (VS Code supports reusable instruction files and prompt files). (Visual Studio Code)
My "minimal viable prompt" example (works well for ASP.NET Core service+controller scaffolding):
You are a senior ASP.NET Core Web API and Business Services architect. Scaffold the {MethodName} endpoint and its service methods across layers.
DELIVERABLES (in this order)
Controller action: add C# method {MethodName} to {ControllerName}.cs.
Service contract: add method signature to {ServiceInterfaceName}.cs.
Service implementation: implement {MethodName}Async in {ServiceImplementationName}.cs.
Documentation & style: include XML docs, preserve existing code style, namespaces, and comments.
Safety: do not modify unrelated classes or methods.
PROJECT LAYOUT
Controller file: *.WebAPI\Controllers\{ControllerName}.cs
Service interface: *.BusinessServices.Interface\{ServiceInterfaceName}.cs
Service implementation: *.BusinessServices.Implementation\{ServiceImplementationName}.cs
MODELS
Request DTO: {RequestModel} (Properties: {RequestProperties})
Response DTO: {ResponseModel} (Properties: {ResponseProperties})
ENDPOINT & ATTRIBUTES
HTTP verb: {HttpMethod}
Apply: [ApiDocSummary], [ApiDocDescription], [ApiDocParameterDescription], [ApiDocQueryDescription], [ApiDocResponse(typeof(ApiResponseWrapper<{ResponseModel}>))]
Use [FromRoute], [FromQuery], [ListFilters] where appropriate.
Bind request parameters correctly and validate inputs (basic guard clauses).
LOGGING & ERROR HANDLING
At method start, log: _logger.LogInformation("{MethodName} started");
Wrap service call in try/catch(Exception); on error, log and return 500 with ApiError.
If service response.Status == false, return base.StatusCode(response.StatusCode, new ApiError(...)).
SERVICE CONTRACT & IMPLEMENTATION
Add to interface: Task<ApiResponseWrapper<{ResponseModel}>> {MethodName}Async({RequestModel} request);
Implement in service:
Log start and errors.
Call underlying client/repository.
Return the wrapped response.
Add <inheritdoc/> XML comment.
OUTPUT FORMAT
Provide three separate code blocks, one per file, in this exact order:
{ControllerName}.cs — full action method inserted in place, preserving style.
{ServiceInterfaceName}.cs — added interface signature with brief XML summary.
{ServiceImplementationName}.cs — full method implementation with logging and error handling.
Use the correct namespaces and file paths listed above.
Do not include or alter unrelated code outside the shown insertions.
CONSTRAINTS
Keep naming consistent with existing project conventions.
Prefer async/await patterns and cancellation tokens if already used in the project.
Avoid breaking changes to public APIs unless explicitly required.
If any placeholder ({...}) lacks information, state reasonable assumptions inline as TODO comments and proceed.
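To make the deliverables concrete, here is roughly the shape of output the prompt asks for, collapsed into one hypothetical sketch. Every name (OrdersController, GetOrderAsync, GetOrderRequest) is a stand-in for the {...} placeholders; ApiResponseWrapper<T> and ApiError are project-specific types stubbed minimally so the sketch is self-contained, and the custom [ApiDoc*] attributes are omitted.

```csharp
// A hypothetical rendering of the prompt's deliverables. All names stand in for the
// {...} placeholders; ApiResponseWrapper<T> and ApiError are project-specific types,
// stubbed minimally here. The custom [ApiDoc*] attributes are omitted for brevity.
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

public record GetOrderRequest(int OrderId);
public record GetOrderResponse(int OrderId, string State);
public record ApiError(string Message);

public class ApiResponseWrapper<T>
{
    public bool Status { get; set; }
    public int StatusCode { get; set; }
    public string Message { get; set; } = "";
    public T? Data { get; set; }
}

/// <summary>Service contract addition requested by the prompt.</summary>
public interface IOrderService
{
    Task<ApiResponseWrapper<GetOrderResponse>> GetOrderAsync(GetOrderRequest request);
}

[ApiController]
[Route("api/orders")]
public class OrdersController : ControllerBase
{
    private readonly IOrderService _orderService;
    private readonly ILogger<OrdersController> _logger;

    public OrdersController(IOrderService orderService, ILogger<OrdersController> logger)
    {
        _orderService = orderService;
        _logger = logger;
    }

    /// <summary>Returns a single order by id.</summary>
    [HttpGet("{orderId}")]
    public async Task<IActionResult> GetOrderAsync([FromRoute] int orderId)
    {
        // Logging and error handling follow the prompt's LOGGING & ERROR HANDLING rules.
        _logger.LogInformation("GetOrderAsync started");
        try
        {
            var response = await _orderService.GetOrderAsync(new GetOrderRequest(orderId));
            if (!response.Status)
            {
                return StatusCode(response.StatusCode, new ApiError(response.Message));
            }
            return Ok(response);
        }
        catch (Exception ex)
        {
            _logger.LogError(ex, "GetOrderAsync failed");
            return StatusCode(500, new ApiError("Unexpected error"));
        }
    }
}
```

In the real workflow the agent inserts these pieces into the three existing files listed under PROJECT LAYOUT rather than emitting one standalone file like this.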
B) Prompt-over-Code for testing & maintaining existing code
For test work, Copilot can generate unit tests, run them, and iterate on failures as part of agent workflows; there are official guides for generating tests (xUnit, etc.) and for streamlining test authoring/fixing in both Visual Studio and VS Code. It’s also useful for explaining code, proposing fixes, and generating concise summaries or commit messages (with configurable rules). This offloads a lot of routine work. (Microsoft Learn, Visual Studio Code, GitHub Docs)
Note: commit-message generation exists across Microsoft’s tooling; treat it as a draft and add the why, not just the what. (Microsoft for Developers, GitHub Docs)
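For a sense of the output, this is the style of test such a workflow typically drafts; PriceCalculator is a made-up class used purely for illustration, not code from my project.

```csharp
// A minimal, hypothetical xUnit example of the style of tests an agent workflow drafts.
// PriceCalculator exists only for illustration; it is not part of the project.
using System;
using Xunit;

public class PriceCalculator
{
    public decimal ApplyDiscount(decimal price, decimal percent)
    {
        if (percent < 0m || percent > 100m)
            throw new ArgumentOutOfRangeException(nameof(percent));
        return price - price * percent / 100m;
    }
}

public class PriceCalculatorTests
{
    [Fact]
    public void ApplyDiscount_ReducesPriceByGivenPercentage()
    {
        var calculator = new PriceCalculator();

        var discounted = calculator.ApplyDiscount(100m, 10m);

        Assert.Equal(90m, discounted);
    }

    [Fact]
    public void ApplyDiscount_RejectsPercentOutsideValidRange()
    {
        var calculator = new PriceCalculator();

        Assert.Throws<ArgumentOutOfRangeException>(() => calculator.ApplyDiscount(100m, 150m));
    }
}
```

The real win is less any single test than letting the agent run the suite, read the failures, and iterate on them.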
C) Prompt-Script-Code for automation via scripts & code generation
AI is good at writing the scripts that write the code—useful wherever you need stable, repeatable outputs that are diff-friendly in Git:
OpenAPI → DTOs/clients/server stubs: Use OpenAPI Generator locally or in CI to produce typed models and HTTP clients from a Swagger/OpenAPI spec; a consumption sketch follows this list. There’s an off-the-shelf GitHub Action for this. (GitHub)
Design tokens → UI artifacts: Style Dictionary converts design tokens into platform assets (CSS vars, Android/iOS resources, docs). You can wire it to Tokens Studio transforms and run it in CI, and even feed those outputs into scaffolded UI components. (Style Dictionary, Tokens Studio)
End-to-end CI/CD hooks: token and codegen pipelines are commonly automated (e.g., via GitHub Actions). (Michael Mang)
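To make “typed models and HTTP clients” concrete, here is a hypothetical consumption of a client produced by OpenAPI Generator’s csharp generator against the public Petstore spec. The namespaces, classes, and method names (Org.OpenAPITools.*, PetApi, GetPetByIdAsync) are generator defaults and will differ with your spec and configuration.

```csharp
// Hypothetical use of a client generated by OpenAPI Generator (csharp generator) from the
// Petstore spec. Namespaces, class names, and method names come from the generator's
// defaults and your spec; treat them as illustrative, not exact.
using System;
using System.Threading.Tasks;
using Org.OpenAPITools.Api;
using Org.OpenAPITools.Client;
using Org.OpenAPITools.Model;

class Program
{
    static async Task Main()
    {
        var config = new Configuration { BasePath = "https://petstore3.swagger.io/api/v3" };
        var petApi = new PetApi(config);

        // DTOs and operations mirror the OpenAPI schemas/operations, so regenerating after a
        // spec change yields a reviewable, diff-friendly commit instead of hand-edited clients.
        Pet pet = await petApi.GetPetByIdAsync(1);
        Console.WriteLine($"{pet.Id}: {pet.Name}");
    }
}
```

Because the generated code is committed (or produced in CI), a spec change shows up as an ordinary, reviewable diff.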
Limitations: these pipelines still require scripting experience and code review. Good results rarely come from a single prompt; iterate and pin script versions for reliability.
Practical pros & cons I've seen:
Pros
Faster refactors and scaffolding via Agent Mode with in-editor multi-file edits, terminal commands, and self-healing loops. (Visual Studio Code, Microsoft Learn)
Rapid test coverage improvements and easier maintenance (tests, explanations, summaries, commit drafts). (Microsoft Learn, Visual Studio Code)
Reusable prompts/instruction files that enforce team conventions. (Visual Studio Code)
Reliable, repeatable asset/code generation baked into CI (OpenAPI, design tokens). (GitHub, Style Dictionary)
Cons
Prompt engineering cost: high-quality prompts and context take time (the payoff is reuse). (Visual Studio Code)
Environment sensitivity: agents reflect your repo’s current patterns; unclear context yields uneven results. (Visual Studio Code)
Research caveats: Deep Research produces cited, useful briefs, but it's not a replacement for professional market analysis; verify critical claims. (OpenAI)
Dmytro Kravchyna, Lead Software Developer at Netminds


