The Onboarding Problem Nobody Talks About
AI has taken over the small tasks that used to teach new engineers how a system works. Leaders who don't rebuild the learning function will find out in eighteen months.
A few years ago, in a regulated environment where a wrong log line could cost the company a seven-figure fine, a senior engineer I worked with sat down next to a new hire during an incident review. The new hire had shipped a fix two weeks earlier. The fix was now the reason we were in a room at 11pm.
The senior engineer asked the question I've heard some version of for my whole career. "Why did you think this would hold?"
The new hire pointed at the PR. The logic was tight, the tests passed, three approvals sat on the review. What he couldn't do was explain what the code around his change was actually protecting. He didn't know why a validation check three files over existed. Nobody had told him what the service upstream did when it saw a null there. He had fixed the symptom. The invariant he broke wasn't visible in the diff he was looking at.
That conversation stayed with me because the engineer was sharp. He'd come from a good program. He was moving faster than any junior I'd managed in the past ten years. And he didn't understand the system he was changing.
The reason is quieter than most leaders want to admit. Over the last few years, the small tasks that used to teach new engineers the codebase have mostly stopped being theirs. AI handles them. Bug triage, the three-line fix, the obvious refactor, the "read this module and tell me what it does" exercise. All of it moves through a code assistant now, and the output is usually good enough to ship.
The new hire's first month used to look like slow, uncomfortable archaeology. Read old code. Ask three people what it does. Write a doc nobody reads. Fix a tiny thing and watch it break in staging. That slog wasn't the onboarding. It was the curriculum. Nobody wrote it down, nobody scheduled it, nobody put it on the OKRs. It happened because the work was the only way to get things done, and along the way a mental model of the system got built in someone's head.
Take away the slog and the curriculum goes with it. You still get the fix. You lose the model.
The first few months feel great. The new hire ships more tickets than a comparable hire did three years ago. Managers look at throughput and conclude ramp is faster. Some claim they've cut it in half. And they have, for a definition of ramp that stops at first merge.
The problem is that first merge measures the wrong thing. Onboarding is supposed to produce someone who can diagnose a production issue at 2am without rolling the dice. That capability is built over a few hundred hours of frustration with code the engineer didn't write and can't change. A month of well-completed tickets does not get anyone close to it.
About eighteen months out is where I've seen this bill come due. A senior leaves. A subsystem they built starts misbehaving in a way the dashboard doesn't catch. The mid-level engineers reach for the assistant and get a plausible fix. The fix ships. A week later the same behavior surfaces somewhere else, because the root cause was a shared assumption in a library nobody on the team has actually read.
In a regulated setting, this is worse than slow. It's auditable. When the regulator asks why a control failed, "our engineer relied on an AI suggestion and didn't understand the underlying invariant" is not an answer that keeps your license.
So the question is what to do about it, and I'd rather be concrete than abstract.
The first thing I've seen work is treating code review as teaching rather than as a quality gate. Reviews should cost time. They should go longer than the change would seem to require. Ask the author to walk through the surrounding code, not just the diff. Ask them what would break if their change shipped wrong. If they can't answer, that is the work, not a side conversation.

The reason this part has to be a human is that the value is the new hire forming a model in their own head, out loud, in front of someone who already has one. An assistant can produce a plausible explanation of the surrounding code in seconds. The new hire reads it, nods, and walks away with the same gap they came in with. The reviewer's job is to create the friction that turns reading into understanding, and that friction only exists when someone on the other side of the table can tell whether the explanation is real.

In past roles, I had a standing rule that any engineer in their first six months got at least one review per week where the reviewer asked them to explain a function they didn't write. It slowed us down by a few percent. It was the single highest-return investment I ever made in a team.
Architecture walkthroughs help, but they can't be slide shows. Pick a real incident from the last quarter. Put a senior at the whiteboard. Have them reconstruct what happened, including what they thought at each step and why they were wrong. Two hours. Open questions. No AI in the room. This is the closest thing I've found to rebuilding the reading-old-code muscle when the code is not being read organically anymore.
The move that gets the most resistance is a deliberate no-AI window during onboarding. Not for the whole ramp. For specific exercises: read this module and summarize it, debug this issue with the assistant off, pair with a senior and narrate your reasoning. Two weeks of this, spread over the first quarter, is usually enough. The new hires push back at first. The ones who take it seriously tend to become the people on-call rotations can actually count on.
The flip side worth naming is that AI does work very well in teams that already have discipline. Architecture Decision Records, post-incident write-ups, runbooks people actually maintain. That body of context is what an assistant needs to give answers that hold up under load, because the model is grounded in the team's actual history instead of guessing from the diff in front of it. The teams that get the most out of these tools were already the ones writing things down. The teams that struggle hardest are the ones that never wrote any of it down, hoped the assistant would compensate, and got plausible answers that were wrong in ways nobody caught in review. Discipline is the precondition for AI to help. An assistant cannot read what the team never wrote.
A pattern I had to correct in myself was letting AI quietly absorb the friction that had always been part of growing engineers. It looked like progress. It was progress, for the visible work. What it was not doing was building the second layer, the mental model of the system I used to take for granted because it formed whether I designed for it or not. Once I stopped assuming the curriculum would emerge from the work, I started scheduling it back in.
The leaders who figure this out in the next year or two will have teams that can still debug at 2am. The ones who don't will discover, eighteen months from now, that they've built a generation of engineers who can ship features and can't hold a system in their heads. This is a design choice someone, somewhere, is quietly making every day by assuming onboarding still works the way it used to.
It doesn't. Name the curriculum, or lose it.



