A shared component sits between two engineering teams at a mid-size software company. Neither side owns it. Security patches have been blocked for weeks behind an ownership discussion that nobody wants to have. Meanwhile, a product manager used an AI tool to fix a customer-facing bug in a related workflow. The fix was correct. She submitted a merge request. The owning team rejected it: "you don't understand the context." She fired back: "then write your principles down so people can follow them." Neither side was wrong. Both were operating in an organisation that had never defined who owns what, or how outsiders contribute.
Every CTO I talk to right now wants to know how to "do AI" in their engineering organisation. The honest answer is usually uncomfortable: the reason AI-assisted development is creating friction isn't AI. It's organisational debt that was already there. Unclear boundaries. Unowned systems. No rules for cross-team contribution. AI didn't create these problems. It just pressure-tested them at a speed that made them impossible to ignore.
This is an article about organisational hygiene, not AI tooling. Three gaps keep showing up in the engineering organisations I work with. They predate AI, microservices, and agile. But the pace of modern development, AI-assisted or otherwise, has made them urgent in a way they never were before.
Boundary clarity
Most engineering organisations grow their team structure organically. Teams get named after products, projects, technical layers, or whatever was urgent when the team was created. Over time, those names calcify into identity. Nobody revisits whether the boundaries still make sense.
The result is that teams end up defending territory based on labels rather than actual responsibility. "That screen is in our product, so it's our work." "That data flow runs through our team, so you need our sign-off." Work gets blocked, duplicated, or fought over, and the arguments feel principled when they're actually just confusion about who owns what.
Naming is a symptom. The real problem is that boundaries are drawn at mixed levels of abstraction. Some parts of the org are built around products. Others around technical capabilities. Others around business functions. These are different things. A product contains multiple capabilities. A capability cuts across multiple products. When you mix these levels in your org structure, nobody can reason about where a piece of work belongs, and every ambiguous case becomes a political negotiation.
I've watched this cost real time and money. At one organisation, a cross-cutting regulatory function was placed inside a product team because it was "convenient." The result was six months of friction as the regulatory team tried to make changes in other products' codebases and got blocked because the org chart implied they didn't belong there. A feature that should have taken weeks took a quarter. At another company, a shared component ended up orphaned between two teams. Neither maintained it. Security patches sat unmerged for weeks, and when a vulnerability was eventually exploited, the incident response took three times longer than it should have because nobody knew who was responsible.
One engineering manager described spending years trying to explain why their team legitimately needed to modify screens inside another team's interface. The other team saw it as an incursion. His team saw it as their responsibility. Both were right, given the ambiguous structure they'd been handed.
There's no single correct way to draw these boundaries. Domain-oriented structures tend to age better than product-oriented ones, in my experience, because domains are more stable than product lines. But the specific approach matters less than the consistency. Pick a level of abstraction and stick with it. Make sure every team can answer "what do we own and what don't we own?" without a twenty-minute debate. That clarity is what lets people reason about where work belongs, whether the work comes from a human or an AI agent.
In practice, this is hard because boundaries become identity. Restructuring feels like an attack on the people inside the structure. But the cost of not doing it is perpetual territorial friction that has nothing to do with technical constraints and everything to do with organisational ambiguity.
The ownership layer everyone skips
Organisations invest heavily in product ownership: who decides what to build. They invest in team structure: who builds it. But they rarely invest in technical ownership: who keeps it running, who responds when it breaks, who updates it when dependencies rot.

This is the layer that makes "you build it, you run it" actually work. Without it, you get the symptoms that every senior operator recognises. Shared services that nobody maintains. Production alert channels where notifications pile up and nobody responds. Security tooling that flags dozens of unowned projects. Dependency upgrades that stall for months because the shared library "belongs to everyone," which means it belongs to no one.
I worked with an organisation where a critical shared service sat between two teams. When I asked who would be called if it went down at 3am, the answer was a shrug. During a separate incident, engineers in a morning standup were building new features while a system they partially depended on was in a degraded state. They didn't consider it "theirs." Technically, it wasn't. But nobody else considered it theirs either. That incident lasted four hours longer than it needed to because the first thirty minutes were spent figuring out who should even be looking at it.
The worst version of this I've encountered was a major version upgrade of a shared framework dependency that cut across multiple teams. Nobody owned the upgrade. The technical work was perhaps two weeks of effort. The actual elapsed time was closer to four months: coordination overhead, finger-pointing, blocked releases, and three separate escalations to senior leadership before anyone committed to doing the work. One engineer described it as "a bloodbath." Two others cited it as a reason for leaving the company.
Now layer AI on top of this. An AI agent can generate a perfectly functional new service or a dependency update. It cannot own what it creates. It cannot respond to a 3am page. It cannot make the architectural trade-off call when a migration path has two viable options. If nobody is named as the owner before AI-generated code ships, nobody will be the owner after. The ownership gap doesn't just persist. It scales.
What technical ownership actually requires
Every service, component, module, and shared dependency needs a named owner. Not a team that "sort of looks after it." A named, documented, auditable owner responsible for keeping it running, responding to incidents, updating dependencies, and maintaining an architectural direction for that component. This lives in a service catalogue, in repository metadata, in runbooks. Not in someone's head and not in a meeting note from six months ago.
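To make "auditable" concrete, here is a minimal sketch of what checking such a catalogue could look like. The service names, team names, and structure are hypothetical, and a real organisation would likely hold this data in a service catalogue tool or per-repository metadata rather than an inline dictionary; the point is only that the check is mechanical once the data exists.

```python
# Minimal ownership audit over a hypothetical service catalogue.
# Flags anything without a named owning team -- the "belongs to
# everyone" systems that in practice belong to no one.

catalog = {
    "billing-api":      {"owner": "payments-team",  "oncall": "#payments-oncall"},
    "shared-auth-lib":  {"owner": None,             "oncall": None},
    "report-generator": {"owner": "analytics-team", "oncall": "#analytics-oncall"},
}

def unowned(catalog):
    """Return the services nobody is unambiguously responsible for."""
    return sorted(name for name, meta in catalog.items() if not meta["owner"])

if __name__ == "__main__":
    for name in unowned(catalog):
        print(f"UNOWNED: {name}")
```

An empty result from a script like this is the thirty-second answer to "whose system is this?"; a non-empty one is the ownership gap made visible.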
This doesn't require your technical ownership lines to match your domains perfectly. It just requires that somebody is unambiguously responsible. If you separate making choices from living with their consequences, the choices get worse.
The question this answers is simple: whose system is this? If you can't answer that for every service in your estate within thirty seconds, you have a gap. And that gap will bite you hardest exactly when you can least afford it: during incidents, during upgrades, and during the kind of rapid change that modern development demands.
Contribution norms
Cross-team code contributions have always been a social problem, not a technical one. Engineers have always submitted fixes to other teams' repositories. Data analysts have always written scripts that touch production. The friction is old. What's new is the volume. AI tools have dropped the barrier to producing plausible code to near zero, and the number of contributions arriving at a team's door has increased accordingly. Informal norms and goodwill don't scale.
Open-source communities solved this decades ago with maintainers, contribution guidelines, and clear governance for who merges what. Most engineering organisations have never built the internal equivalent with any rigour. Three principles make it work:
Clear ownership. Every piece of code has a team responsible for maintaining and monitoring it. Without this, there's nobody to accept or reject a contribution. This is the prerequisite.
Team authority. You can't have accountability without authority. The owning team sets the quality bar, defines the process, and can decline contributions that don't meet standards or that they lack capacity to integrate. If you hold a team responsible for a system but strip their ability to control what goes into it, you've created an impossible job.
No shadow code. Any code that runs in production must live in a version-controlled repository with a named owner. AI-generated code, scripts, automations, "temporary" fixes: if it can't be seen, it can't be maintained.
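The three principles can be wired together mechanically, in the spirit of a CODEOWNERS file. The sketch below is illustrative, with made-up path prefixes and team names: given an ownership map, it works out which teams' sign-off a change set needs, and surfaces any path with no owner rather than letting it merge silently.

```python
# CODEOWNERS-style routing sketch: map changed file paths to the
# owning teams whose approval the contribution needs. Paths and
# team names are hypothetical, not from any real repository.

OWNERS = {
    "services/billing/": "payments-team",
    "services/search/":  "discovery-team",
    "libs/shared-auth/": "platform-team",
}

def required_approvers(changed_paths):
    """Return the set of owning teams whose sign-off a change needs."""
    teams = set()
    for path in changed_paths:
        matched = False
        for prefix, team in OWNERS.items():
            if path.startswith(prefix):
                teams.add(team)
                matched = True
        if not matched:
            # An unowned path is exactly the gap the "no shadow
            # code" rule is meant to close: surface it loudly.
            teams.add("UNOWNED")
    return teams
```

With a map like this in place, a merge request from outside the team routes itself: the owning team is named, their authority to accept or decline is explicit, and nothing lands without an owner.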
The PM in the opening story wasn't wrong to submit a fix. The engineering team wasn't wrong to push back. Both were operating without rules. Define the rules, and the argument resolves itself. This matters more as AI agents start submitting contributions at scale: without explicit norms, every agent-generated merge request becomes a fresh political negotiation rather than a routine process.
The real AI constraint
Here is what the AI productivity conversation keeps missing: producing code is now faster than ever, and more people (and soon, more agents) can do it. But a team still needs to review, understand, integrate, and maintain that code. That part hasn't gotten faster.
If your boundaries are unclear, contributions hit the wrong team. If your ownership is undefined, code ships without anyone responsible for keeping it running. If your contribution norms don't exist, every merge request is a fresh argument. These problems existed before AI. AI just runs them at a speed where you can no longer muddle through.
The real question is not "how do we generate more code?" It's "how do we make sure the speed of creating code doesn't outrun the organisation's ability to put it into production sustainably?"
The organisations that will absorb AI-assisted development well are not the ones with the best tools. They're the ones where boundary clarity, technical ownership, and contribution norms are already in place. The plumbing matters more than the pump.
Three questions, not a framework
These are three questions every engineering organisation needs to be able to answer, and most can't:

- Whose problem is this? Boundary clarity. Which team is responsible for that part of the business?
- Whose system is this? Technical ownership. Who maintains the code, responds to incidents, updates dependencies?
- How do we work across boundaries? Contribution norms. What happens when someone outside the owning team needs to make a change?
The three gaps affect each other, but you can fix one without redesigning everything else. A boundary restructuring doesn't have to mean a technical ownership reshuffle. New contribution norms don't require redrawing boundaries.
A quick diagnostic
Score your organisation honestly:
- Can a new engineer look at your org chart and correctly predict which team owns a given business capability? Or do they need to ask three people and get three different answers?
- Can you identify the owner of every service and shared component in your estate within thirty seconds? Or are there orphaned systems that "belong to everyone"?
- When someone outside a team submits a code change, is there a clear process? Or does it depend on who knows whom and who's feeling generous that week?
Most organisations I work with score poorly on at least two of three.
The value of getting these right extends well beyond AI readiness. Reorganisations become tractable because you know what belongs where. Migrations become possible because you know who owns what. Incidents resolve faster because ownership is unambiguous. New team members ramp up faster because the boundaries are legible. These are the basics of a well-run engineering organisation. AI just makes the cost of not having them impossible to ignore.
Start before it's urgent
If I had to pick a sequence, I'd start with boundary clarity, because everything else depends on knowing whose problem something is. Then technical ownership, because you can't hold teams accountable for systems they don't explicitly own. Then contribution norms, because they only work when ownership is already clear. But that's a starting point, not a prescription. Your context will dictate what hurts most.
The temptation is to wait. The restructuring feels too big. The ownership audit feels too tedious. The contribution norms feel premature because "we're not really doing AI development yet." But the organisations that wait until AI-generated contributions are flooding their merge request queues will be building these foundations under pressure, which is the most expensive and disruptive time to do it.
Start now. Pick the gap that's causing the most friction. Spend a week making it explicit. You don't need a transformation programme. You need clarity, written down, that people can point to when the next territorial argument starts.