System Design Space

Updated: March 24, 2026 at 2:56 PM

AI in SDLC: the path from assistants to agents by Alexander Polomodov


Extended report on the transition from AI assistants to agent scenarios in the SDLC: tools, protocols, governance, performance assessment and practical implementation cases.

AI in the SDLC becomes interesting the moment an assistant stops being a helper and starts participating in the working loop.

The chapter shows how agentic workflows change development tooling through protocol design, governance, evaluation, and the practical limits of adoption inside a real team.

In design reviews, it helps you discuss autonomy boundaries, the trust model, and AI's role in delivery without drifting into generic talk about the future of software development.

Practical value of this chapter

Design in practice

Translate the chapter's guidance on moving from assistants to agents into architecture decisions for data flow, model serving, and quality-control points.

Decision quality

Evaluate system quality through both model and platform metrics: precision/recall, latency, drift, cost, and operational risk.

Interview articulation

Frame answers as data -> model -> serving -> monitoring, showing where constraints appear and how you manage them.

Trade-off framing

Make the trade-offs of moving from assistants to agents explicit: experiment speed, quality, explainability, resource budget, and maintenance complexity.


An extended version of the report on how engineering teams are moving from AI assistants to agent-based scenarios and rebuilding development processes for Software Engineering 3.0.

Format: Tech talk / AI + Platform Engineering
Focus: Agentic workflows, MCP/A2A, AI governance, impact measurement in SDLC
Context: Evolution of the previous report about AI in a large company

Source

Telegram: book_cube

The main post for the report with the structure of topics and the context of the speech.


Key trajectory

Stage 1

Assistants: copilot mode in IDE

The first scenarios focused on autocompletion, template generation, and speeding up local development.

Stage 2

Transition to agents

The agent acts on a goal: it plans its steps, invokes tools, operates on the repository, and returns a measurable result.
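The goal-driven loop described above can be sketched as a minimal plan-and-execute structure. This is an illustrative sketch, not code from the talk; the tool names and the fixed plan are hypothetical, and a real agent would generate the plan with a model and react to tool output.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    """Toy goal-driven agent: executes a plan of tool calls and keeps an audit log."""
    tools: dict[str, Callable[[str], str]]
    log: list[str] = field(default_factory=list)

    def run(self, goal: str, plan: list[tuple[str, str]]) -> list[str]:
        """Execute each (tool, argument) step of a plan toward the goal."""
        results = []
        for tool_name, arg in plan:
            output = self.tools[tool_name](arg)  # invoke the named tool
            self.log.append(f"{tool_name}({arg}) -> {output}")  # traceability
            results.append(output)
        return results

# Two toy "repository" tools standing in for real integrations
agent = Agent(tools={
    "read_file": lambda path: f"contents of {path}",
    "run_tests": lambda target: f"tests passed for {target}",
})
out = agent.run(
    goal="fix failing test",
    plan=[("read_file", "src/app.py"), ("run_tests", "src/")],
)
```

The point of the sketch is the shape of the loop: every tool invocation is logged, so the "measurable result" includes a trace, not just the final output.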

Stage 3

From point solutions to a platform

This stage requires protocols and shared infrastructure: context management, security, auditing, access control, and standardized integrations.

Stage 4

Software Engineering 3.0

SDLC is shifting to a model of “a person sets an intent, an agent executes, a person validates and decides to release.”

Agent scenarios in platform practice

  • Agent mode in product development (demonstrated on the "5 Letters" game case).
  • Agent in a Python notebook for data work and a faster analytical cycle.
  • Agent for QA and test-case generation, with an emphasis on covering risky scenarios.
  • Agent for code review: finding defects, code smells, and violations of standards.
  • Vulnerability-detection agent (safeliner) in the secure SDLC.

Related chapter

Programming Meanings by Alexey Gusakov (CTO Yandex)

Transition to intent-driven development and product-ML cycles.


Infrastructure and economic drivers

Economics of agency

Falling compute costs and improving foundation-model quality make multi-step agent scenarios economically practical.

Integration protocols

MCP and A2A approaches reduce the cost of connecting tools and simplify the orchestration of agent-to-tool and agent-to-agent flows.
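To make the protocol point concrete: MCP is built on JSON-RPC 2.0, so an agent-to-tool invocation is a plain request object rather than a bespoke integration. The sketch below shows the shape of a `tools/call` request; the tool name and arguments are hypothetical, not from the talk.

```python
import json

# Sketch of an MCP-style tool invocation (JSON-RPC 2.0).
# "search_repository" and its arguments are illustrative placeholders.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_repository",
        "arguments": {"query": "flaky tests", "limit": 5},
    },
}
payload = json.dumps(request)  # what actually goes over the wire
```

Because every tool speaks the same request shape, adding a new tool to an agent costs a schema declaration instead of a custom adapter, which is exactly the integration-cost reduction the section describes.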

Tool base

Practice is already shifting toward specialized agents and CLI modes (for example, Claude Code and OpenAI Codex).

Management and regulation of agency

  • The agent management model should set levels of autonomy and decision-making boundaries.
  • Critical actions (security, prod-configs, migrations) require human-in-the-loop and traceable approval flow.
  • Evaluation of the result should include not only speed, but also quality: defects, vulnerabilities, cost of rollback.
  • Agent metrics become part of the DX platform and engineering management at the organizational level.

Related topic

Observability & Monitoring Design

How to build a measurable feedback loop for production platforms.


Measuring the effectiveness of assistants and agents

Productivity

Lead time, cycle time, throughput, time-to-first-PR

Quality

Defect escape rate, rework rate, flaky tests, review findings

Reliability and safety

Security findings, policy violations, rollback rate, incident impact

Acceptance by engineers

DAU/WAU, retention, share of tasks done with agents, reasons engineers opt out
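Two of the metrics above can be made concrete with a toy computation; the task records and field names are illustrative assumptions, not data from the report. Defect escape rate is the share of defects that reached production, and agent adoption is the share of tasks completed with an agent.

```python
# Hypothetical per-task records: whether an agent was used, and where
# a defect (if any) was caught.
tasks = [
    {"agent_used": True,  "defect": "prod"},    # escaped to production
    {"agent_used": True,  "defect": None},
    {"agent_used": False, "defect": "review"},  # caught in code review
    {"agent_used": True,  "defect": None},
]

defects = [t["defect"] for t in tasks if t["defect"] is not None]
# Share of all defects that escaped to production
defect_escape_rate = sum(d == "prod" for d in defects) / len(defects)
# Share of tasks completed with an agent in the loop
agent_share = sum(t["agent_used"] for t in tasks) / len(tasks)
```

In practice these would be computed over delivery-pipeline events rather than a hand-written list, but the definitions stay the same.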
