Psychological Safety in Knowledge Sharing

Felipe Hlibco

A few months into my time at Google, I watched a junior engineer ask a question in a team meeting that I thought was obvious. The answer was in the documentation—technically. It was covered in the onboarding guide—sort of. Three senior engineers had the answer off the top of their heads.

What struck me wasn’t the question. It was the silence before someone answered.

Two full seconds of silence where I could feel the junior engineer regretting asking.

That silence is the whole problem.

Why People Hoard Knowledge

The default assumption about knowledge silos is that people are lazy, territorial, or too busy to document. In my experience, those are secondary causes at best.

The primary cause is fear.

Fear of looking incompetent by sharing something “everyone already knows.” Fear of being wrong publicly. Fear of writing documentation that gets criticized for being incomplete. Fear of asking a question that reveals a gap in understanding.

Amy Edmondson calls this an absence of psychological safety—the shared belief that the team is safe for interpersonal risk-taking. Her research, along with Google’s own Project Aristotle study (which found psychological safety to be the single strongest predictor of team effectiveness), points to the same conclusion: teams don’t fail at knowledge sharing because of bad tools. They fail because sharing feels risky.

I’ve seen this pattern at every organization I’ve worked at. The flavor varies. At a startup, the fear is “if I admit I don’t know this, they’ll realize they hired the wrong person.” At a larger company, it’s “if I document this poorly, it’ll reflect badly on me.” On a high-performing team, it’s “everyone else seems to get it, so asking would mark me as the weak link.”

The outcome is identical regardless of the flavor: people sit on knowledge rather than risk sharing it.

The Compounding Cost

Knowledge hoarding doesn’t just slow things down. It creates invisible technical debt that compounds in ways that are hard to trace back to the root cause.

When one engineer holds critical context about a system and doesn’t share it—for whatever reason—the team develops a dependency on that person. Not a healthy dependency. An invisible one. Everything works fine until that person goes on vacation, changes teams, or leaves the company. Then you’re staring at a system nobody fully understands, wondering how you got here.

Repeated mistakes are the other cost. If a team doesn’t have a culture of sharing post-incident learnings—real learnings, not sanitized summaries—they’ll hit the same bugs, make the same architectural miscalculations, and wonder why certain problems keep recurring. The knowledge existed to prevent the second occurrence; it just never made it from the person who learned the lesson to the person who needed it.

The blameless postmortem practice that came out of SRE culture is one of the better countermeasures I’ve encountered. When the explicit agreement is “we’re examining the system, not the person,” people share more honestly. The incident report captures what actually happened rather than a curated version designed to minimize individual blame.

But even blameless postmortems only work if the culture genuinely supports them. I’ve sat in “blameless” reviews where the language was blameless but the body language wasn’t. People pick up on that discrepancy instantly.

What Leaders Need to Do Differently

Psychological safety isn’t created by policy. You can’t mandate it in a wiki page. It emerges from consistent behavior over time—specifically, the behavior of the people with the most positional power in the room.

Model vulnerability publicly. When I don’t know something, I say so. In meetings, in code reviews, in Slack. “I don’t have context on this—can someone catch me up?” is a sentence that costs a leader almost nothing in credibility but pays enormous dividends in normalizing knowledge-seeking behavior. If the manager can admit a gap, the junior engineer can too.

Respond to questions with generosity. This sounds soft, but it’s the mechanism. When someone asks a question, the response they get determines whether they’ll ask the next one. A sigh, an “it’s in the docs,” a redirecting “you should know this”—these responses shut down future questions far more effectively than any stated policy encourages them.

At Google, the internal culture around code readability reviews is instructive. Reviewers are trained to frame feedback as “what about…” rather than “you should have…” The distinction is subtle but it creates a fundamentally different emotional experience for the person receiving feedback. They learn instead of defending.

Make knowledge sharing the default, not the exception. Architecture Decision Records (ADRs) shouldn’t require a heroic act. They should be a normal, low-ceremony part of making decisions. “We chose Postgres over DynamoDB because X, Y, Z” takes 15 minutes to write and saves hours of “why did we pick this?” conversations over the following year.
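In practice, a low-ceremony ADR needs little more than three sections. Here is a minimal sketch (the title, number, date, and details are invented for illustration; the Context/Decision/Consequences structure follows the common Nygard-style template):

```markdown
# ADR-012: Use Postgres instead of DynamoDB for order storage

Status: Accepted
Date: 2025-01-15

## Context
We need transactional writes across orders and inventory, and the team
already operates Postgres for two other services.

## Decision
Use Postgres. Cross-table transactions are a hard requirement, and
DynamoDB's single-table modeling would add design overhead we can't
justify at our current scale.

## Consequences
We accept managing connection pooling and the scaling limits of a
single primary. Revisit if write volume outgrows vertical scaling.
```

The point is the ceremony level: three headings, a dozen lines, no review board. Anything heavier and people will skip it.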

The key is reducing the activation energy for sharing. Every additional step between “I have knowledge” and “that knowledge is accessible to the team” is a friction point where fear can intervene. If documentation requires a formal review process, people will skip it. If sharing a rough draft is acceptable, they’ll do it more often.

The Uncomfortable Truth

I want to be honest about something. Building psychological safety is slow, fragile, and can be undone by a single bad interaction. One dismissive comment in a meeting can undo months of trust-building. One public blame session after an incident can silence a team for quarters.

That asymmetry is uncomfortable. It means leaders have to be consistent at a level that’s genuinely demanding. You can’t have a bad day and take it out on someone’s pull request. You can’t be generous in one-on-ones and dismissive in group settings. People calibrate their safety not to your best moments but to your worst ones.

I don’t have this figured out. I’ve caught myself being short with a question when I was stressed. I’ve written code review comments that were technically accurate but tonally harsh. Every manager has these moments; the question is whether you notice them and correct course, or whether they accumulate into a pattern.

The teams that share knowledge freely—that document willingly, ask questions confidently, and learn from failures openly—aren’t teams with better documentation systems. They’re teams where people genuinely believe they won’t be punished for honesty. Building that belief is the hardest and most important work a leader does.