AI in SDLC: the path from assistants to agents
An extended version of the report on how engineering teams are moving from AI assistants to agent-based scenarios and rebuilding development processes for Software Engineering 3.0.
Source
Telegram: book_cube
The main post for the report with the structure of topics and the context of the speech.
Key trajectory
Assistants: copilot mode in IDE
The first scenarios focused on autocomplete, boilerplate generation, and speeding up local development.
Transition to agents
The agent acts on a goal: it plans steps, invokes tools, operates on the repository, and returns a measurable result.
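The loop described above — plan a step, invoke a tool, observe, repeat until there is a measurable result — can be sketched in a few lines. This is a minimal illustration, not an API from the report; the planner, tool names, and record shapes are all hypothetical.

```python
# Minimal sketch of a goal-driven agent loop: plan a step, invoke a tool,
# record the observation, repeat until done. All names here (the planner,
# the tools) are hypothetical illustrations, not APIs from the report.

def run_agent(goal, planner, tools, max_steps=10):
    """Plan -> act -> observe loop returning a measurable result plus a trace."""
    history = []
    for _ in range(max_steps):
        step = planner(goal, history)                  # in practice, an LLM call
        if step["action"] == "finish":
            return {"status": "done", "result": step.get("result"), "trace": history}
        observation = tools[step["action"]](**step.get("args", {}))
        history.append((step["action"], observation))  # feeds the next planning step
    return {"status": "step_limit", "trace": history}

# Toy usage: a scripted "planner" that reads one file, then finishes.
tools = {"read_file": lambda path: f"<contents of {path}>"}

def planner(goal, history):
    if not history:
        return {"action": "read_file", "args": {"path": "README.md"}}
    return {"action": "finish", "result": history[-1][1]}

outcome = run_agent("summarize the repo", planner, tools)
```

The key property is the trace: every step and observation is recorded, which is what makes the result measurable and auditable later.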
From point-solution to platform
Scaling requires protocols and shared infrastructure: context management, security, auditing, access control, and standardized integrations.
Software Engineering 3.0
The SDLC is shifting to a model where a person sets the intent, an agent executes, and a person validates and decides on release.
Agent scenarios in platform practice
- Agent mode in product development (demoed on the “5 letters” word-game case).
- Agent in a Python notebook for data work, speeding up the analytics cycle.
- Agent for QA and test-case generation, with an emphasis on covering risky scenarios.
- Agent for code review: finding defects, code smells, and violations of standards.
- Vulnerability detection agent (safeliner) in the secure SDLC.
Related chapter
Programming meanings
Transition to intent-driven development and product-ML cycles.
Infrastructure and economic drivers
The economics of agents
Falling compute costs and improving foundation-model quality make multi-step agent scenarios practical.
Integration protocols
MCP and A2A approaches reduce the cost of connecting tools and simplify the orchestration of agent-to-tool and agent-to-agent flows.
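To make the protocol point concrete: MCP frames agent-to-tool calls as JSON-RPC 2.0 messages, so any MCP client can invoke tools from any MCP server without a bespoke integration. Below is a sketch of a `tools/call` request; the tool name and arguments are made-up examples.

```python
import json

# Sketch of an MCP-style tool invocation. MCP uses JSON-RPC 2.0 over a
# transport such as stdio or HTTP; the "run_tests" tool and its arguments
# here are hypothetical examples, not part of the spec.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "run_tests",                        # tool exposed by the server
        "arguments": {"path": "tests/", "fail_fast": True},
    },
}
wire = json.dumps(request)  # serialized form sent over the transport
```

Standardizing this envelope is what lowers the integration cost: the orchestration layer only needs to speak one message format, regardless of which tool sits behind it.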
Tool base
Practice is already moving towards specialized agents and CLI modes (for example, Claude Code, OpenAI Codex).
Governance and regulation of agents
- An agent governance model should define autonomy levels and decision-making boundaries.
- Critical actions (security, prod configs, migrations) require a human in the loop and a traceable approval flow.
- Evaluating results should cover not just speed but quality: defects, vulnerabilities, and the cost of rollback.
- Agent metrics become part of the DX platform and of engineering management at the organizational level.
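A governance gate of this kind can be sketched as a policy check that produces an audit record: actions riskier than the agent's autonomy level are blocked unless a named human approver is attached. The level numbers and action classes below are illustrative assumptions, not a standard.

```python
from datetime import datetime, timezone

# Hypothetical risk classes per action; real classes would come from policy.
RISK = {"read_code": 0, "open_pr": 1, "change_prod_config": 3, "run_migration": 3}

def authorize(action, autonomy_level, approver=None):
    """Return an audit record; block critical actions without a human approver."""
    risk = RISK[action]
    needs_human = risk > autonomy_level
    allowed = (not needs_human) or approver is not None
    return {
        "action": action,
        "allowed": allowed,
        "needs_human": needs_human,
        "approver": approver,
        "at": datetime.now(timezone.utc).isoformat(),  # traceable approval flow
    }

# A migration is blocked at autonomy level 1 unless a human signs off.
blocked = authorize("run_migration", autonomy_level=1)
approved = authorize("run_migration", autonomy_level=1, approver="alice")
```

Returning a record instead of a bare boolean is the point: every decision, including who approved it and when, lands in the audit trail.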
Related topic
Observability & Monitoring Design
How to build a measurable feedback loop for production platforms.
Measuring the effectiveness of assistants and agents
Productivity
Lead time, cycle time, throughput, time-to-first-PR
Quality
Defect escape rate, rework rate, flaky tests, review findings
Reliability and safety
Security findings, policy violations, rollback rate, incident impact
Acceptance by engineers
DAU/WAU, retention, share of tasks done with agents, reasons for opting out
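Two of the metrics listed above can be computed directly from task records. The field names and timestamp format here are hypothetical; in practice the data would come from the DX platform's event stream.

```python
from datetime import datetime

def lead_time_hours(created, released):
    """Hours from task creation to release (ISO-like timestamps, minute precision)."""
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(released, fmt) - datetime.strptime(created, fmt)
    return delta.total_seconds() / 3600

def defect_escape_rate(found_in_prod, found_total):
    """Share of all found defects that escaped past review and tests into prod."""
    return found_in_prod / found_total if found_total else 0.0

# Example: a task released one day after creation; 3 of 20 defects escaped.
lt = lead_time_hours("2024-05-01T09:00", "2024-05-02T09:00")   # 24.0
der = defect_escape_rate(3, 20)                                # 0.15
```

Pairing a speed metric (lead time) with a quality metric (escape rate) is what keeps agent adoption from being measured on throughput alone.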
Links and materials
YouTube: AI in SDLC
Full recording of the talk.
VK Video
An alternative platform for viewing the episode.
Telegram: main post
Report structure and the key points of the topic.
Previous report
An introductory framework for integrating AI into development processes.
Selection of sources (part 1)
A curated breakdown of links on agents, cases, and practices.
Selection of sources (part 2)
Continued materials for in-depth study.
AI survey in development
Study of the influence of AI on software engineering in Russia.

