Let’s dive into the best handpicked stories, starting with ⤵️
CTO Diaries
Sebastian Heide-Meyer zu Erpen is our guest and inspiration for this week’s CTO Diaries. Sebastian is a VPE at tonies, and also a tech mentor with an inclination towards lean, outcome-based agility and technical excellence.
Developers feeling unaligned, burned out, or demotivated: isn’t this what we, as engineering leaders, keep hearing in most of our tech catch-up calls?
Lack of Alignment with Company Vision
Symptom: Disconnected developers.
Root Cause: Poor communication of company vision.
Burnout
Symptom: Exhaustion and decreased productivity.
Root Cause: Excessive workload and unrealistic deadlines.
General Demotivation
Symptom: Lack of interest and enthusiasm.
Root Cause: Monotonous work and limited growth opportunities.
Actionable Strategies
Clear Communication of the Vision: Regularly use storytelling to illustrate how developers' work contributes to business goals.
What We Did: Monthly all-hands meetings to bridge the gap between strategic goals and daily tasks.
Promoting Work-Life Balance: Offer flexible hours and encourage breaks and vacations to prevent burnout.
What We Did: Introduced 'No meetings day' and 'No dev day' to promote recharging.
Providing Growth Opportunities: Offer training and clear career paths to keep developers engaged.
What We Did: A buddy program for role-shadowing to inspire leadership.
Fostering a Collaborative Culture: Encourage team-building and cross-functional collaboration.
What We Did: Formed cross-functional 'Pods' to enhance understanding and innovation.
Recognising and Rewarding Contributions: Regularly acknowledge achievements and use peer-to-peer recognition.
What We Did: Utilised a gamified peer recognition platform, 'HuddleUp'.
Ensuring Meaningful Work: Align tasks with developers' strengths and rotate projects to maintain interest.
What We Did: Monthly project swaps to balance workload and maintain engagement.
Loving our content? Share it with your fellow tech readers ❤️
“AI is not coming to solve all our problems and write all our code for us—and even if it was, it wouldn’t matter. Writing code is but a sliver of what professional software engineers do, and arguably the easiest part. Only we have the context and the credibility to drive the changes we know form the bedrock for great teams and engineering excellence.“ —Charity Majors, Honeycomb CTO
Article of the Week ⭐
🦿Generative AI Is Not Going To Build Your Engineering Team For You
You may know Charity Majors from her work as CTO of Honeycomb, the observability provider. She delightfully shares her story of climbing the tech industry ladder as a self-taught engineer, and how such ladders are vanishing as the industry matures. A key maturity lever, she argues, is AI, notably generative AI built on top of LLMs that produces written content for software engineers. After all, most of our work is producing text on a screen; it’s the compiler or the environment that changes. Most code is written English, give or take.
It’s easy to generate code, and hard to generate good code
Charity describes the industry as closer to an apprenticeship nowadays, with senior engineers, managers, coaches, and a hint of AI helping newcomers grow:
The software industry has matured, increasing the prerequisite knowledge and experience needed to enter.
Practical paths into the industry now include coding bootcamps alongside traditional CS degrees.
Software engineering is primarily learned on the job through hands-on experience, making it an apprenticeship industry.
How do working engineers really use generative AI?
GenAI tools are superb for generating code en masse, especially the boring, tedious bits. They fall short on quality, however. Complex ideas and behaviours are difficult to express in a prompt, and that is the main difficulty with software: expressing what we want the software to do.
Without deep, tacit knowledge of the domain and prompting finesse, massaging GenAI into producing the desired output is a task of similar complexity to writing the code yourself. In that regard, it resembles a junior engineer, or something less.
Remember, an AI is not part of your team. It won’t learn, it won’t support you, and it won’t ask how you are doing. It’s there to be prompted and to perform a task. It will not autonomously grow or take on issues without being directed, nor should you trust it to.
Welcome to groCTO. Every week, we publish handpicked articles, along with tech market updates and podcasts on engineering culture. As a curious CTO, your inbox is easily cluttered with posts that are irrelevant to you, but not anymore!
By subscribing, you empower us to deliver impactful content that supports CTOs, VPEs, and tech enthusiasts within the industry.
The bottleneck we face is hiring, not training
Charity believes the main issue is giving engineers their first job. Rather than taking them on and training them into well-experienced seniors, companies outsource this responsibility to bootcamps and lower tiers of academia that do not face real-world challenges, often even pushing the cost onto the inexperienced candidate.
AI is not coming to solve all our problems and write all our code for us. […] Great teams are how great engineers get made. Nobody knows this better than engineers and EMs. It’s time for us to make the case, and make it happen.
Other highlights 👇
📊Gartner® Report on Software Engineering Intelligence Platforms 2024
“Software engineering intelligence (SEI) platforms as solutions provide software engineering leaders data-driven visibility into the engineering team’s use of time and resources, operational effectiveness, and progress on deliverables” - Gartner®.
Software engineering leaders face pressure to demonstrate their teams’ value through data-driven insights, despite the challenge of fragmented data. According to Gartner®, adoption of SEI platforms is expected to surge from 5% in 2024 to 50% by 2027, driven by the need for enhanced productivity. Check out the full report summary by Typo:
🎲 Estimation does not help us learn how well our software will meet its aims; putting software into use does.
Haven’t we learned our lesson? That is what crosses my mind when I read Daniel’s article. I cannot help nodding along with his account of the vast problems estimation carries: it is time that could be spent on something more valuable and more aligned with making meaningful progress.
It delivers less value than assumed.
It decays quickly: it starts out inaccurate, and its accuracy worsens as we learn more.
It is used to set deadlines and to incentivise over-working.
It feels like a direct response to questions from leadership, stakeholders and collaborators.
It is inaccurate and incomplete. People try to get better at estimation, and you can improve, but not enough to make estimates useful for what they are being ‘hired’ to do.
So, if software estimation is not the best approach in most cases, what are the alternatives? Daniel seems to be aligned with modern Continuous Delivery practices:
🔪 Working smaller - slicing,
⛈ Forecasting,
🫳 Learning by doing,
🔐 Apply the most relevant practice to the problem at hand,
🥛 Move from asking how little we can spend to asking what would be most valuable.
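The forecasting alternative above can be made concrete with probabilistic forecasting from historical throughput. Here is a minimal Monte Carlo sketch, assuming weekly completed-item counts are the only input (the function name and the sample data are hypothetical, not from Daniel's article):

```python
import random

def forecast_completion(backlog_size, weekly_throughput, simulations=10_000):
    """Monte Carlo forecast: simulate how many weeks it takes to finish
    `backlog_size` items by sampling from historical weekly throughput,
    instead of asking anyone for an estimate."""
    results = []
    for _ in range(simulations):
        remaining, weeks = backlog_size, 0
        while remaining > 0:
            remaining -= random.choice(weekly_throughput)
            weeks += 1
        results.append(weeks)
    results.sort()
    # Report percentiles rather than a single-point estimate.
    return {p: results[int(len(results) * p / 100)] for p in (50, 85, 95)}

# Hypothetical history: items completed per week over the last seven weeks.
history = [3, 5, 2, 6, 4, 5, 3]
print(forecast_completion(30, history))
```

Reporting "85% of simulations finished within N weeks" gives stakeholders the deadline-shaped answer they want, without anyone having to guess.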
Laugh it Off 🤡
P.S. Do you have any other puns you'd like to share? Drop them in the comment section and let's keep the laughs going! We will publish them in the coming weeks and tag you on our LinkedIn channel.
💬 Using AI to encourage best practices in the code review process
I have deep respect for
’s diligence in reading dry, data-rich white papers to bring us the best and freshest takes from big tech and adjacent industries. The paper he dug up from the archives for us highlights Google’s approach to AI-assisted code reviews. Among the authors is Goran Petrović, an avid mutation tester at Google. Called AutoCommenter, the tool was developed by Google to help sift through diffs and detect violations of best practices. Every day, tens of thousands of changes to Google’s codebase go through the review process, and tens of thousands of developers participate as both code authors and reviewers.
The paper details how the model was trained, how it was introduced to teams, how the data was refined, and more.
summarises the key takeaways to help you decide whether you need something similar:

The code review process is expensive: Investing in making it more efficient is worthwhile, especially at Google scale.
How the model works: The model receives a prompt and the source code. The model identifies rule violations, providing a URL to the relevant best practice document, or returns an empty target if no violations are found. It also includes a confidence score for its findings.
AutoCommenter analyses code and posts comments on violations: Developers and reviewers can interact with these comments using feedback buttons, including thumbs up/down for usefulness and a "Please fix" button for significant issues that must be addressed before merging the code.
Google rolled out AutoCommenter to all its developers over a year. They deployed it in stages: first to the paper’s authors for a month, then to an early adopter group of 3,000 volunteers for about a year, followed by half of all developers, and finally to everyone.
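The comment flow described above can be modelled roughly as follows. This is an illustrative sketch only; the class, field names, and confidence threshold are assumptions, not Google's actual schema or implementation:

```python
from dataclasses import dataclass

@dataclass
class ReviewComment:
    """Hypothetical shape of an AutoCommenter-style finding: a location,
    a link to the violated best-practice doc, and a confidence score."""
    file: str
    line: int
    rule_url: str
    confidence: float

def filter_findings(comments, threshold=0.8):
    """Surface only high-confidence findings to reviewers, mirroring the
    paper's idea of gating comments on the model's confidence score."""
    return [c for c in comments if c.confidence >= threshold]

findings = [
    ReviewComment("main.py", 42, "https://example.com/style#naming", 0.95),
    ReviewComment("main.py", 7, "https://example.com/style#docs", 0.40),
]
print(filter_findings(findings))  # only the high-confidence finding remains
```

Gating on confidence is what keeps a tool like this from drowning reviewers in noise; the thumbs up/down feedback then becomes training signal for tuning that threshold.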
That’s for Today!
Whether you're hustling on your side projects, catching up with the latest technologies, or simply relaxing and recharging, we wish you a lovely day ahead.
See you next week, Ciao 👋
Credits 🙏
Curators: Diligently curated by our community members Denis & Kovid.
Sponsors: This newsletter is sponsored by Typo AI - Ship reliable software faster.
1) Subscribe — If you aren’t already, consider becoming a groCTO subscriber.
2) Share — Spread the word amongst fellow Engineering Leaders and CTOs! Your referral empowers & builds our groCTO community.