Leveraging GenAI; DORA Report on AI Impact; Shared Understanding at Scale; Code Rot; TDD with AI
Issue #46 Bytes
🌱 Dive into Learning-Rich Sundays with groCTO ⤵️
🎙️ Engineering Management in the Age of GenAI ft. Suresh Bysani, Director of Engineering at Eightfold
AI can help engineering teams move faster, but speed without solid fundamentals is a recipe for tech debt! On the latest groCTO podcast, Suresh Bysani, Director of Engineering at Eightfold, shares how engineering managers & leaders can leverage GenAI without compromising quality. From using AI for rapid prototyping to navigating legacy systems, he breaks down practical frameworks, common pitfalls, and leadership mindsets for the AI era.
Here is a quick sneak peek into the discussion and the link to the full podcast.
Article of the Week ⭐
“[…] researchers found that an increase [in] AI adoption negatively impacts delivery stability. [They] hypothesize this is due to the larger batch size of AI-assisted code, which makes it harder to code review.”
What is the Impact of Generative AI on Software Development?
Lizzie Matusov breaks down the 2024 DORA Report’s findings on AI adoption. There is a shifting trend: developers self-report higher satisfaction from using AI tools such as code assistance, agentic workflows, and code-review hints, yet from the business side, productivity looks flat or even negative.
That said, generative AI is reshaping software development. Its real value depends less on the tools themselves and more on how teams integrate them.
According to the 2024 DORA report, while AI boosts productivity, flow, and code quality, it can also introduce instability. The most effective organizations set clear AI usage policies, reinforce engineering best practices, and give developers time to learn. Teams that adopt it thoughtfully will outpace those who don’t.
Strategies for Responsible AI Integration
The DORA report recommends several strategies to balance AI speed with sustainable team health:
Anchor AI usage to core delivery practices
Don't let velocity undermine quality. Reinforce testing, small batch sizes, and CI/CD discipline.

Build tight feedback loops
AI moves fast. Make sure code reviews, test coverage, and observability are up to speed.

Let developers grow into it
AI adoption peaks after ~15–20 months. Create space for experimentation and skill-building.

Track meaningful metrics
Use a layered approach (a quick sketch of how these layers might be encoded follows the list):

Individual: Flow, satisfaction, AI usage
Team: AI trust, reliance, review times
Service-level: Code complexity, tech debt, performance
Org-wide: Customer satisfaction, delivery efficiency
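To make the layered approach concrete, here’s a minimal sketch in Python of how a team might encode those four layers as a tracking config. This is our illustration rather than anything prescribed by the DORA report, and every metric name is a placeholder for whatever your tooling actually measures:

```python
from dataclasses import dataclass, field


@dataclass
class MetricLayer:
    """One layer of the layered measurement approach."""
    name: str
    metrics: list[str] = field(default_factory=list)


# All metric names below are placeholders -- substitute whatever your
# analytics or engineering-intelligence tooling actually reports.
LAYERS = [
    MetricLayer("individual", ["flow_score", "satisfaction", "ai_usage_rate"]),
    MetricLayer("team", ["ai_trust", "ai_reliance", "review_time_hours"]),
    MetricLayer("service", ["code_complexity", "tech_debt_ratio", "perf_p95_ms"]),
    MetricLayer("org", ["customer_satisfaction", "delivery_efficiency"]),
]

if __name__ == "__main__":
    # Print the layered view, e.g. as a starting point for a team dashboard.
    for layer in LAYERS:
        print(f"{layer.name}: {', '.join(layer.metrics)}")
```

The point of writing it down, even this simply, is that each layer gets its own owner and review ritual instead of everything landing on one overloaded dashboard.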
⚡ Key Takeaways
AI improves dev happiness and velocity: a 25% increase in AI adoption correlates with higher flow, job satisfaction, and productivity.
Code is better and faster, but riskier: docs improve and code complexity drops, yet delivery stability can decline.
Biggest threat? Big batch sizes: AI enables larger changes, which make reviews harder and deploys riskier.
Policies = adoption: Teams with clear AI policies saw 451% higher usage.
Measure the right things: Mix individual, team, service, and org-level metrics (e.g. flow, complexity, code review time).
Feedback loops are critical: Strengthen testing, code review, and observability to safely integrate AI into delivery.
Adoption takes time: AI reliance peaks ~15–20 months in. Give devs space to learn and grow with the tools.
📢 Surviving AI? Stephan’s Got Your Back!
AI Disruption Got You Reeling?
Stephan Schmidt, the legendary CTO coach, has launched "Survive AI"! This newsletter is your essential guide to navigating the torrents of the great AI disruption, offering sharp insights and practical, actionable strategies. Known for his brilliant CTO coaching & engineering leadership writing, Stephan will equip you not just to survive, but to truly thrive in this rapidly evolving landscape.
Stop feeling overwhelmed and start confidently navigating the AI future – Subscribe to “Survive AI” on Substack now! 😉
Other highlights 👇
Shared Understanding At Scale
As teams and companies scale, so does the complexity of their systems, communication, and coordination. John Cutler’s post is a masterclass in how organizations attempt to manage this complexity and the ways in which their models, views, and rituals either help or hurt.
He explores five nuanced but interconnected themes that shape how information flows (or breaks down), how strategy is framed (or distorted), and how shared understanding can be built (or lost) as scale increases.
At the heart of it is this question:
How do you create shared clarity at scale without flattening the nuance or drowning in information overload?
John Cutler has the answers:
1. Cascade Flattening
Goal hierarchies and strategy trees often look elegant, but only a few layers are ever truly used in decision-making or daily rituals. The middle layers become dead weight.
Over-modeling for alignment leads to slideware instead of action. Collapse what doesn’t get used. Focus on the levels that get reinforced through rituals, attention, and feedback loops.
Try this: Circle the levels in your strategy/goals cascade that your team interacts with regularly. Simplify the rest.
2. Volume, Filters, and Interfaces
The problem with high-volume information isn’t volume itself; it’s the lack of structure and purpose-built interfaces. Unstructured broadcasts or overly abstracted status reporting leads to blind spots, distortions, and delays.
Try this: Build shared interfaces (like Amazon’s Weekly Business Review) that allow for fast scanning, pattern recognition, and role-relevant context. Review your reporting or team rituals. Are you reviewing too much unstructured input? Could a normalized view make things more actionable?
3. No View to Rule Them All
The dream of the universal dashboard is seductive. But every view comes with trade-offs between compression and clarity. Tie views to specific jobs-to-be-done. Air traffic control works because the view is tuned to the role. Most org dashboards are not.
Try this: Reframe your team dashboard as a question: “What’s off track this month?” Or “Where are we overextended?” Then design the view accordingly.
4. The Seduction of Loops
Feedback loops are great but teams often fall into the trap of assuming all loops are cleanly nested and synchronized. They're not. Avoid building beautiful strategy-delivery-discovery diagrams that don’t reflect reality. Recognize that different loops operate on different timescales, levels of formality, and rhythms. Don’t force-fit them into one neat system.
Try this: Map 3–5 “loops” in your org (e.g. roadmap planning, quarterly strategy, sprint retros). What time horizon and people do they involve? Are any of them truly nested, or just loosely coupled?
5. Model Traps
Models help us simplify and align but they also shape what gets seen, who gets heard, and what gets prioritized. Don’t lean too heavily on simple models (e.g. MoSCoW, tiering, North Star metrics) without interrogating what they compress or exclude.
Models are useful, but they are also political: some voices (e.g. support teams, internal tools) get deprioritized when models over-compress complexity.
Try this: Pick a model your team uses. Ask:
What does it leave out?
Who does it benefit?
Could it invite more depth or flexibility without losing clarity?
To scale shared understanding, you don’t need one dashboard or one loop to rule them all. You need:
Interfaces tuned to tasks
Models that invite reflection, not blind obedience
Rituals that reinforce clarity over time
And the humility to flatten what’s not working
Code Rot: What it is and How to Identify it
Is your codebase feeling sluggish and buggy? This blog post by Typo dives into the hidden enemy of software projects: code rot. Learn what it is, the different types (active and dormant), and the surprising causes that might be creeping into your daily development. Discover the tell-tale symptoms and the serious impacts on your team’s productivity and your bottom line.
Most importantly, it explores actionable strategies to fix and prevent code rot, including code reviews, refactoring, and the power of tools like Typo for tracking code quality. Don’t let your code decay – read on to learn how to keep it healthy and efficient!
How Test Driven Development Accelerates AI Coding
By writing tests first, you're effectively prompting the AI in a precise, executable way. You're narrowing its focus, lowering cognitive load (for both you and the model), and getting immediate feedback. TDD, once a discipline for code quality and developer clarity, now doubles as a scaffolding for reliable agentic AI conversations.
TDD: an effective interface for collaborating with AI
Alex highlights an interesting observation shared by many XP practitioners: XP practices and principles lend themselves well to agentic coding too. Think of it this way: the Extreme Programming principles help teams self-manage, i.e. self-prompt into a leaner mode of operation that is iterative and learning-focused. The same qualities that make XP teams more productive also apply to driving “productivity” (if we can call it that) from AI models, in the form of better accuracy.
Alex’s claim is that an experienced TDD-er can break a coding problem into units so small that most models’ completions fill them in with a sensible implementation, or at least a first draft of one.
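To illustrate the idea, here’s a minimal sketch (ours, not code from Alex’s post) of tests-as-prompt in Python: the two tests come first and pin down the behaviour precisely, and the function below them is the kind of first draft a model’s completion might supply. The slugify example and every name in it are hypothetical:

```python
# test_slugify.py -- in a TDD-with-AI flow, these tests are written first
# and act as the "prompt": they specify the behaviour precisely enough
# that a model's suggested implementation has little room to wander.
import re


def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"


def test_slugify_strips_punctuation():
    assert slugify("Ship it, now!") == "ship-it-now"


# The kind of first draft an AI assistant might fill in once the tests
# exist; running pytest on this file verifies it immediately.
def slugify(text: str) -> str:
    # Keep runs of lowercase letters and digits, join them with hyphens.
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)
```

The loop closes neatly: the same tests that served as the prompt are also the acceptance check, so a wrong completion fails fast instead of slipping into the codebase.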
Editor’s note: Sounds reasonable to us, having seen similar results with TDD, Event Modeling, and AI-flow for production discussions. Have you tried it?
Find Yourself 🌻
That’s it for Today!
Whether you’re innovating on new projects, staying ahead of tech trends, or taking a strategic pause to recharge, may your day be as impactful and inspiring as your leadership.
See you next week(end), Ciao 👋
Credits 🙏
Curators - Diligently curated by our community members Denis & Kovid
Featured Authors - Lizzie Matusov, John Cutler, Alex Jukes
Sponsors - This newsletter is sponsored by Typo AI - Ship reliable software faster.
1) Subscribe — If you aren’t already, consider becoming a groCTO subscriber.
2) Share — Spread the word amongst fellow Engineering Leaders and CTOs! Your referral empowers & builds our groCTO community.