Engineering vs Project Management; Improving AI Products; Effective Delegation; Bad Management; Is it "Done"?
Issue #48 Bytes
🌱 Dive into Learning-Rich Sundays with groCTO ⤵️
Engineering vs Project Management
Ever wondered why some engineering teams thrive while others struggle? The secret might lie in understanding the distinct roles of Engineering and Project Management. This blog provides the clarity you need, highlighting their different focuses, from long-term strategy to immediate delivery.
Don't let confusion cost you – read on!
Article of the Week ⭐
“Here’s our agent architecture – we’ve got RAG here, a router there…”
[Holding up my hand to pause the enthusiastic tech lead.]
“Can you show me how you’re measuring if any of this actually works?”
A Field Guide to Rapidly Improving AI Products
Hamel Husain’s Field Guide is a distilled blueprint for building real-world machine learning systems. Rather than focusing on academic ML or research-grade solutions, it is grounded in hard-won lessons from deploying ML at scale, and it’s meant for teams shipping ML alongside real product engineering.
TL;DR
Ship the loop, not the model. Build end-to-end pipelines early even with dummy models.
Obsess over the data. Most applied ML is won or lost in the data layer.
Start small. Scale later. Most teams fail by over-engineering in the first six months.
Measure everything. Log prediction outputs, monitor performance shifts, and instrument user feedback loops.
Cross-functional or bust. Successful ML teams involve PMs, engineers, domain experts, and ops from day one.
1. Error Analysis
Before anything else: define a narrow, measurable problem. Jumping into modeling without this clarity is a path to waste. Avoid vague problem definitions like "detect anomalies" or "optimize engagement"; focus on creating a clear and consistent reward function.
Key Practice: Align all stakeholders on what success looks like before you write a line of code.
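In practice, error analysis can start embarrassingly small: pull a sample of logged traces, hand-label the failures, and count the categories. Here’s a minimal sketch in Python; the traces.jsonl file and its fields are illustrative assumptions, not something from the guide:

```python
import json
from collections import Counter

# Hypothetical log of model interactions: one JSON object per line,
# each hand-labeled during review with a failure category (or "ok").
with open("traces.jsonl") as f:
    traces = [json.loads(line) for line in f]

failures = Counter(
    t["failure_category"] for t in traces if t["failure_category"] != "ok"
)

# The most frequent failure modes tell you where to invest next.
for category, count in failures.most_common():
    print(f"{category}: {count} ({count / len(traces):.0%} of traces)")
```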
2. A Simple Data Viewer
The majority of ML failures stem from data, not models. Data is often incomplete, unlabeled, noisy, or undocumented. Make sure you have adequate tooling and visibility into your data at all stages.
Key Practice: Treat data like a first-class product: build data validation, contracts, labeling pipelines, and tracking systems early.
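A data viewer doesn’t have to be fancy to be useful. Here’s a throwaway command-line sketch (field names and file paths are assumptions) that steps through logged examples and records a quick verdict for each:

```python
import json

# Step through logged examples one at a time and capture a judgment.
with open("traces.jsonl") as f:
    traces = [json.loads(line) for line in f]

reviews = []
for i, t in enumerate(traces):
    print(f"\n--- example {i + 1} of {len(traces)} ---")
    print("INPUT: ", t["input"])
    print("OUTPUT:", t["output"])
    verdict = input("good / bad / skip? ").strip()
    reviews.append({"id": t.get("id", i), "verdict": verdict})

# Persist the reviews so they feed back into error analysis.
with open("review_labels.jsonl", "w") as f:
    for r in reviews:
        f.write(json.dumps(r) + "\n")
```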
3. Empower Domain Experts To Write Prompts
Don’t overfit the problem with a complex solution if a simple one will do. Simple models are more interpretable, easier to debug, and less brittle. Prompts are plain English, so bring domain experts into prompt writing early, just as you would involve them in planning, documentation, or architecture analysis.
Key Practice: Use fancy models only when simpler baselines (like logistic regression or heuristics) fail meaningfully.
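One low-tech way to let domain experts own the prompts: keep each prompt in a plain-text template file they can edit without touching application code. A minimal sketch, with file and placeholder names invented for illustration:

```python
from string import Template

# prompts/triage_prompt.txt holds the prompt with $ticket_text and
# $product_area placeholders; domain experts edit the file directly.
with open("prompts/triage_prompt.txt") as f:
    prompt_template = Template(f.read())

prompt = prompt_template.substitute(
    ticket_text="My invoice shows the wrong billing period.",
    product_area="billing",
)
print(prompt)  # hand this off to your LLM client of choice
```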
4. Bootstrapping with Synthetic Data
Fast feedback loops are more valuable than premature optimization. Avoid waterfall-style ML, where modeling is done in isolation and handed off late. Create data where you need it, and run your pipeline against realistic-looking examples rather than in “lab environments”.
Key Practice: Get end-to-end pipelines working early with test data, then iterate with tight loops and shared ownership between data and product teams.
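One cheap way to bootstrap that test data is to enumerate combinations of realistic dimensions and push each one through the real pipeline. A sketch, where the dimensions and the stubbed pipeline are both assumptions:

```python
import itertools

# Invented dimensions for illustration; substitute your own domain's.
personas = ["new user", "power user", "frustrated customer"]
intents = ["refund request", "bug report", "feature question"]
tones = ["terse", "rambling", "polite"]

def render(persona: str, intent: str, tone: str) -> str:
    return f"A {tone} message from a {persona} about a {intent}."

synthetic_inputs = [
    render(p, i, t) for p, i, t in itertools.product(personas, intents, tones)
]

def pipeline(text: str) -> str:
    # Stub: replace with your actual end-to-end pipeline call.
    return f"[model output for: {text}]"

# Run every synthetic example through the pipeline and keep the traces.
for example in synthetic_inputs:
    print(pipeline(example))
```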
5. Maintaining Trust In Evals Is Critical
Models interact with humans, systems, and business processes, often in messy ways. Grade very strictly at first, with plenty of human intervention, and scale human review down over time only as accuracy improves.
Key Practice: Build interfaces for monitoring, observability, and feedback from users. Plan for model decay, and integrate retraining workflows into your ops culture.
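One concrete way to keep trust in an automated judge is to keep spot-checking it against human labels, relaxing human review only while agreement stays high. A sketch; the file, field names, and the 90% threshold are all assumptions:

```python
import json

with open("judged_traces.jsonl") as f:
    rows = [json.loads(line) for line in f]

# Only traces that a human also labeled count as spot-checks.
checked = [r for r in rows if r.get("human_label") is not None]
agreement = sum(
    r["judge_label"] == r["human_label"] for r in checked
) / len(checked)

print(f"judge/human agreement: {agreement:.0%} over {len(checked)} spot-checks")
if agreement < 0.9:  # the threshold is a judgment call, not a standard
    print("Agreement dropped: put humans back in the grading loop.")
```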
6. Count Experiments, Not Features
It’s easy to build something too complex to maintain. Once a complex model lands in prod, teams often hesitate to touch it, turning it into untouchable legacy. Share your failures and learnings to keep the investment in tooling and infra transparent.
Key Practice: Prioritize maintainability and tooling early to help evaluate the LLM’s accuracy over time. Version everything, build rollback paths, and default to observability-first practices.
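Counting experiments is easier if every run leaves a record behind. A minimal append-only experiment log, sketched in Python; the schema here is a suggestion, not a standard:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_experiment(prompt: str, eval_score: float, notes: str = "") -> None:
    """Append one record per experiment run so results stay comparable."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt_hash": hashlib.sha256(prompt.encode()).hexdigest()[:12],
        "eval_score": eval_score,
        "notes": notes,
    }
    with open("experiments.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")

log_experiment(prompt="v3 triage prompt ...", eval_score=0.82,
               notes="tightened the refusal instructions")
```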
📢 Surviving AI? Stephan's Got Your Back!
AI Disruption Got You Reeling?
Stephan Schmidt, the legendary CTO coach, has launched “Survive AI”! This newsletter is your essential guide to navigating the great AI disruption, offering sharp insights and practical, actionable strategies. Known for his CTO coaching and engineering leadership writing, Stephan will equip you to not just survive, but truly thrive in this rapidly evolving landscape.
Stop feeling overwhelmed and start confidently navigating the AI future – subscribe to “Survive AI” on Substack now! 😉
Other highlights 👇
How to Delegate Effectively 🤝
Delegating Done Right
Most leaders delegate poorly not because they’re bad at it, but because they skip the groundwork. People think delegation is about offloading work. The real purpose is to increase your team’s capacity to solve problems without you.
If you’re constantly being pulled into decisions, you're just distributing assigned tasks, robbing the team of the chance to exercise their autonomy and talent. Effective delegation gives people the context, ownership, and support to operate independently.
Build the Right Conditions First
You can’t delegate well in a broken system. Before handing off work, make sure:
Vision is clear: Everyone understands the mission and what matters.
Roles are defined: People know what they’re responsible for.
Trust exists: You believe in their judgment, and they believe in yours.
Without these, delegation leads to confusion, micromanagement, or dropped balls.
Adjust the Support You Give
Not everyone needs the same kind of help. Some people need structure and clarity. Others need autonomy. You have to read where each person is and support them accordingly. Delegation fails when you give too much or too little support.
Check in to remove blockers, not to take over.
Delegate Context, Not Just Tasks
Effective delegation isn’t just handing off a to-do.
It’s sharing the bigger picture:
Why this matters
What success looks like
What’s flexible and what’s not
When people have this context, they can make smarter decisions on their own.
🐵 Having the monkey on your back is a metaphor from a well-known Harvard Business Review article for holding the initiative yourself: you are the one who has to take the next step; the ball is in your court. Minimize the time monkeys spend on your back, and figure out how to return them to your reports ASAP.
Bad Management is Everywhere
Allen Holub cuts deep into a problem many engineering leaders quietly nod along to: bad management practices that get normalized, especially in tech.
“Because that’s how it’s always been” isn’t a good reason
Bad management is everywhere.
And the worst part? Most people don’t even see it as bad.
Allen calls out the normalized dysfunction:
PMs handing down “the plan” with no room for feedback
Managers confusing controlling with leading
Teams overloaded with process, but underpowered to say no
It’s not that these managers are evil. It’s that they were taught the wrong things and nobody ever stopped to question it.
‘I don't know how many retros I've sat in where a critical improvement couldn't be made because "they'll never let us do that."’
Real leadership starts with being willing to question the defaults.
What “Done” Should Actually Mean on Product Engineering Teams
Many engineering teams use ‘done’ too loosely, typically defining it as ‘merged to main’ or ‘ready to deploy.’ But this definition misses the point. Shipping to main doesn’t guarantee value. It doesn’t confirm that the feature works as intended, meets user needs, or actually delivers business impact. Avid groCTO readers will recognize Matt Watson’s lean product engineering style in his latest article to help you stay focused.
You’re Not Done If You Don’t Know It Worked
A feature isn’t truly “done” until it:
Is live in production
Has been used by real users
Has produced some level of feedback
Without these, you're shipping code, not outcomes.
One big difference between startups and enterprises? Feedback speed.
At a startup, you ship in the morning and hear from users by lunch.
At a big company? Silence, unless something breaks.
This trains teams to assume no news is good news and to lose sight of impact.
Matt references a recent story by Raechel Boston, who flipped the script: she hunted down feedback, surfaced wins, and brought them back to the team. Quiet work, but powerful.
Now that AI has changed how devs ship, this matters more than ever.
Fast is only good if you’re shipping the right things.
Without feedback loops, your AI advantage is just a faster way to get lost.
So ask your team: Did this make something better for the customers?
If the answer is “I don’t know,” you’re not done.
Find Yourself 🌻
That’s it for Today!
Whether you’re innovating on new projects, staying ahead of tech trends, or taking a strategic pause to recharge, may your day be as impactful and inspiring as your leadership.
See you next week(end), Ciao 👋
Credits 🙏
Curators - Diligently curated by our community members Denis & Kovid
Featured Authors - Hamel Husain, Matt Watson, Allen Holub, Luca Rossi
Sponsors - This newsletter is sponsored by Typo AI - Ship reliable software faster.
1) Subscribe — If you aren’t already, consider becoming a groCTO subscriber.
2) Share — Spread the word amongst fellow Engineering Leaders and CTOs! Your referral empowers & builds our groCTO community.