AlignMap: Interactive Tool

Making Invisible Organizational Structures Visible

An interactive network visualization mapping the connective tissue between a nonprofit and its agency partner — 14 nodes, 44 connections, built in React and SVG as part of IDEO U’s Human-Centered Systems Thinking course. The tool makes invisible organizational structures queryable, comparable, and honest.

IDEO U · Systems Thinking · Jan 2026
14 nodes · 44 connections · 2 versions
01

About the Tool

AlignMap is a custom-built interactive visualization — pure React and SVG, no charting libraries. Every node, connection, and interaction is rendered from data, which means the map isn’t just a picture of a system. It’s a queryable model.
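For the curious, the underlying shape is roughly this: a minimal TypeScript sketch, with hypothetical type and field names rather than the production code.

  // Hypothetical data model (illustrative names, not the production source).
  // Everything on screen renders from structures like these.

  type AlignmentLevel = 'Robust' | 'Functional' | 'Moderate' | 'Weak';

  interface MapNode {
    id: string;                       // e.g. 'accounts-lead'
    label: string;                    // e.g. 'Accounts Lead'
    org: 'catalyst' | 'groundswell';  // which side of the partnership
    x: number;                        // SVG position: a deliberate analytical claim
    y: number;
  }

  interface Connection {
    source: string;                   // MapNode id
    target: string;                   // MapNode id
    alignment: AlignmentLevel;
    diagnosis?: string;               // shown in the click-to-open detail panel
    intervention?: string;            // recommended intervention, same panel
  }

  interface MapVersion {
    label: 'v1' | 'v2';               // hypothesis vs. post-interview
    connections: Connection[];
  }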

You can hover over any node to see its connections illuminate while everything else dims. Click to open a detail panel with diagnosis and recommended intervention. Filter by alignment type to see only the Robust connections, or only the Weak ones. Toggle between the v1 baseline (my original assessment) and v2 (post-interview), and watch the connections shift — upgrades glow green, downgrades glow red, and the structural story of what changed becomes immediately visible.
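The version diff is the trickiest of those interactions. A rough sketch of how it might be computed, continuing the hypothetical types above: rank the four levels, then classify each connection’s change.

  // Hypothetical v1/v2 comparison: rank the levels, classify the change.
  const RANK: Record<AlignmentLevel, number> = {
    Weak: 0, Moderate: 1, Functional: 2, Robust: 3,
  };

  type Change = 'upgrade' | 'downgrade' | 'unchanged' | 'added' | 'removed';

  function classifyChange(v1?: Connection, v2?: Connection): Change {
    if (!v1) return 'added';      // surfaced by the interview, absent from the baseline
    if (!v2) return 'removed';    // e.g. a resolved gap dropping out of the map
    if (RANK[v2.alignment] > RANK[v1.alignment]) return 'upgrade';   // glows green
    if (RANK[v2.alignment] < RANK[v1.alignment]) return 'downgrade'; // glows red
    return 'unchanged';
  }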

The design decision to build from scratch rather than reach for a charting library wasn’t aesthetic — it was methodological. Network visualizations built on top of D3 or Vis.js make assumptions about layout, interaction, and hierarchy that encode a particular theory of how systems work. I wanted a tool where every visual choice was a deliberate analytical claim. The position of nodes, the thickness of connections, the color logic — all of it maps to a specific framework for understanding organizational alignment. When you change the data, the visualization doesn’t just update. It argues.
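To make that concrete, here is roughly what “every visual choice is a claim” looks like in code, again with illustrative values rather than the real palette: alignment quality maps directly to stroke weight and color, so changing the data changes the argument.

  // Illustrative encoding: thickness and color are read off the framework,
  // never set by hand. These specific widths and hex values are placeholders.
  const EDGE_STYLE: Record<AlignmentLevel, { width: number; color: string }> = {
    Robust:     { width: 4, color: '#2e7d32' },
    Functional: { width: 3, color: '#689f38' },
    Moderate:   { width: 2, color: '#f9a825' },
    Weak:       { width: 1, color: '#c62828' },
  };

  function Edge({ conn, nodes }: { conn: Connection; nodes: Record<string, MapNode> }) {
    const { width, color } = EDGE_STYLE[conn.alignment];
    const a = nodes[conn.source];
    const b = nodes[conn.target];
    return <line x1={a.x} y1={a.y} x2={b.x} y2={b.y} stroke={color} strokeWidth={width} />;
  }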

The version comparison isn’t just a feature. It’s the methodological point: your first map is a hypothesis. The interview tests it. The v2 map is what you actually learned.

02

The Problem

I spend my career inside agency-client relationships. I’ve seen brilliant strategy die in the gap between organizations — not because anyone failed, but because the connective tissue between teams was never made visible, never interrogated, never designed.

This is the problem that most organizational consultants never name directly: the relationship between a nonprofit and its agency partner isn’t one relationship. It’s a network of dozens of relationships, each operating at a different level of alignment, a different cadence, and with a different degree of shared understanding about what success looks like. Some of these relationships are robust — genuine co-strategizing with mutual accountability. Some are functional but fragile, held together by one person’s institutional memory. And some barely exist at all, creating structural gaps that nobody notices until a campaign underperforms and the post-mortem reveals that two teams were optimizing for entirely different outcomes.

Why Existing Tools Fail
Org charts show hierarchy, not function. RACI matrices show responsibility, not alignment. Stakeholder maps show influence, not the quality of connection. None of them capture what actually determines whether an agency-client partnership produces excellent work: the degree to which the people doing the work share context, share priorities, and share a definition of what good looks like.

When I enrolled in IDEO U’s Human-Centered Systems Thinking course, I wanted to apply systems mapping to something I knew intimately: the operational relationship between a mission-driven nonprofit and its agency partner. Not as abstraction, but as an honest reckoning with how these systems actually function — where alignment is real, where it’s assumed, and where it’s entirely absent.

03

The System

The map represents a fictionalized but structurally faithful version of a nonprofit-agency partnership. On one side, The Catalyst Foundation: a VP of Marketing overseeing five program areas — Brand Awareness, Audience Growth, Fundraising, Clinical Research, and Policy. On the other, Groundswell Group: an agency with Strategy, Media Ops, Reporting, and Accounts functions, plus leadership and specialist roles.

Between them: 44 connections, each scored on a four-point alignment scale.

  • Robust — strong strategic alignment with regular collaboration and shared outcomes
  • Functional — working relationship with room for deeper integration
  • Moderate — periodic interaction; alignment tends to be circumstantial rather than intentional
  • Weak — minimal interaction; significant gap in alignment and collaboration

The scale itself was a design choice that shaped what the map could reveal. Most alignment frameworks use binary categories — connected or not, aligned or not. The four-point scale does something different: it makes the quality of connection visible, not just its existence. Two nodes can be connected and still be Weak, which means the system looks healthy from a distance but breaks down under pressure.

Design Decision
The distinction between the presence of a relationship and its functional quality is the entire analytical point of the four-point scale.
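In code terms, continuing the hypothetical model above, a binary map can only ask whether an edge exists; this scale can ask how it exists:

  // Connected is not the same as healthy: edges that exist but read Weak.
  const fragile = (v: MapVersion): Connection[] =>
    v.connections.filter((c) => c.alignment === 'Weak');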

What emerged wasn’t a neat org chart. It was a network with visible bottlenecks, structural silos, and a literal gap between the two organizations where coordination depends on a handful of human relationships rather than any designed process. The Clinical Research and Policy nodes sat at the periphery, barely connected to the agency side. The Accounts Lead sat at the center, mediating nearly every cross-organizational interaction. The pattern was clear before I even conducted the interview: this system’s stability depended on specific people, not on infrastructure.

That sounds like an observation. It’s actually a diagnosis.
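And it’s a diagnosis that falls straight out of the data. A sketch, using the same hypothetical types: count each node’s cross-organizational connections, and the load-bearing wall names itself.

  // Simple degree centrality over cross-org edges only (hypothetical helper).
  function crossOrgDegree(v: MapVersion, nodes: Record<string, MapNode>): Map<string, number> {
    const degree = new Map<string, number>();
    for (const c of v.connections) {
      if (nodes[c.source].org === nodes[c.target].org) continue; // same side: skip
      degree.set(c.source, (degree.get(c.source) ?? 0) + 1);
      degree.set(c.target, (degree.get(c.target) ?? 0) + 1);
    }
    return degree; // in v1, the Accounts Lead node dominates this count
  }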

04

The Interview

The course’s third lesson — Humanize the System — asked us to select a stakeholder and interview them in depth. I chose the Accounts Lead, the person who sits at the exact nexus of agency-client operations.

Choosing the Accounts Lead wasn’t arbitrary. In the v1 map, her node carried more connections across the organizational divide than any other single person’s. If the system had a human load-bearing wall, she was it. I wanted to understand what the system felt like from that position — not what the org chart said her role was, but what she actually did all day to keep the partnership functional.

What I expected to learn was how the system fails. What I actually learned was how it succeeds — and why that success is fragile.

The Accounts Lead experiences the system primarily as a translator. Her day-to-day is defined by mediating between client needs and internal team capacity: fielding emails, scheduling meetings, creating tickets, and reviewing deliverables before they reach the client. She describes this translation work as second nature, powered by what she calls “blind confidence” — a willingness to make quick decisions and course-correct rather than slow things down by seeking consensus.

Human Variable
“Blind confidence” is a human variable that doesn’t appear on any process map. It’s the kind of thing that makes a system work better than its design would predict — and the kind of thing that disappears when one person goes on parental leave, takes a new job, or simply burns out. The system isn’t designed to be resilient. It’s designed to be dependent on someone who happens to be excellent at her job.

But the most revealing finding was what she didn’t mention. Clinical Research and Policy never came up in 45 minutes. They’re either truly disconnected from her operational world or so peripheral they don’t register. The planning handoff works. The reporting loop doesn’t. And her speed and decisiveness — the human variables that lubricate multiple connections simultaneously — aren’t built into any process. They live in her.

If she left the system, several relationships that currently read as Functional would likely downgrade, because the translation layer she provides isn’t organizational infrastructure. It’s one person’s institutional memory.

05

What Changed

The interview produced concrete shifts in the map — not cosmetic adjustments, but structural revelations that changed what the system’s story actually was.

Upgraded: The VP-to-Accounts relationship moved from Functional to Robust — not transactional account management, but genuine co-strategizing. This wasn’t visible in the original map because the formal communication channels (status meetings, email updates) obscured the informal ones (sidebar conversations, shared strategic instincts, a mutual understanding of organizational politics). The Accounts-to-Media Ops connection surfaced as Functional, an informal coordination cadence the original map didn’t capture at all.

Downgraded: Accounts-to-Clinical Research dropped from Functional to Weak. She has almost no visibility into research insights, recruitment data, or trial updates. The VP-to-Research connection also dropped — even leadership doesn’t bridge the research silo. This mattered more than I initially understood: Clinical Research represents the organization’s highest-priority initiative, yet the agency partner building their media strategy has almost no feedback loop into whether the work is actually reaching the right populations.

Resolved: The “Unified Planning” gap chip — representing a breakdown in agency-side coordination — was removed. The collaborative memo approach and improved client receptivity represent real progress on planning alignment. This was one of the few places where the system was performing better than the v1 map predicted.

Validated: “Shared Priorities” emerged as the primary structural gap. Programs don’t share priorities laterally; the agency discovers misalignment after the fact. This is the kind of problem that looks like a communication issue from the outside but is actually an information architecture problem. The system doesn’t have a mechanism for making cross-program priorities visible before campaigns launch.

The v2 map tells a different story than v1. Not a worse story or a better one — a more honest one.

06

The Reframe

The original How Might We question focused on rhythm and silos. After the interview, the question shifted to structural resilience:

How might we build translation and coordination capacity into the operational infrastructure itself, so that coordination endures beyond any single person?

The Accounts Lead’s role as a translator is what makes the current system work. But it’s also the system’s greatest vulnerability. The intervention isn’t about adding connections — it’s about building the translation function into process rather than depending on a person to perform it.

The Shift
The original question implied the problem was about frequency — more meetings, more touchpoints, more check-ins. The reframed question pointed to something structural: the system doesn’t have artifacts that carry strategic intent across organizational boundaries. People carry it. And people are finite.

07

The Design Opportunity

The leverage point sits in the information flow between the agency’s reporting and analytics function and the client’s VP and program teams. The planning handoff works. The media strategy is clear. But when it comes to results tracking, data management, and visibility into delivery and media buying performance, the system breaks down.

This matters because it sits upstream of trust. When the client can see how their investment is performing in real time, they stop asking reactive questions at misaligned moments. The information gap drives the communication gap, which drives the priority misalignment. Fixing the reporting loop doesn’t just improve one connection — it relieves pressure across the entire system.

What I didn’t initially see — and what the systems thinking methodology made visible — is that this isn’t just a reporting problem. It’s a translation problem. Analytics has data. Planning has context. Neither has a mechanism for sharing what the other needs.

The data exists. The context exists. The artifact that connects them doesn’t.

08

The Prototype

The Lesson 4 assignment asked us to prototype an intervention. I built a Campaign Measurement Brief — a single pre-campaign document designed to close the alignment gap between Media Planning and Analytics by giving Analytics the strategic context behind campaign decisions, not just KPIs and flight dates.

The brief has four sections, each targeting a specific structural gap the AlignMap revealed:

  1. Strategic Context — the why behind the campaign, connecting success to the organization’s broader mission
  2. Key Questions for Analytics — strategic questions framed as hypotheses, not reporting requirements
  3. Cross-Program Dependencies — upstream and downstream connections to other program areas
  4. Decision Points & Cadence — a quarterly framework specifying what should be answerable at each stage
Strategic Context captures not just “drive enrollment” but the specific strategic focus, target populations, and how success for this campaign connects to the organization’s broader mission. Analytics can’t build meaningful measurement without understanding what “meaningful” means for this particular effort.

The Key Questions for Analytics section reframes the relationship between teams. Can we compare enrollment performance by audience cohort? Is patient-matched targeting outperforming broad demographic approaches? What’s the typical lag between initial ad exposure and enrollment completion? These aren’t metrics requests. They’re strategic questions that give Analytics permission to do analysis, not just reporting.

Cross-Program Dependencies surfaced more than expected. Explicitly mapping these dependencies exposed a system-level blind spot: reporting each campaign in isolation makes it structurally impossible to see cross-program signals. The artifact I designed to improve a handoff problem turned out to also be a diagnostic tool for system visibility — the leverage point was higher-impact than I initially scoped.

Decision Points & Cadence establishes a quarterly framework (Q1 check-in, Q2 mid-flight, Q3 optimization, Q4 wrap) that specifies what questions should be answerable at each stage and who needs to act on the answers. This transforms reporting from retrospective documentation into a forward-looking decision support system.
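Sketched as a data structure rather than a document, with field names that are hypothetical but a structure that mirrors the four sections, the brief looks something like this:

  // Hypothetical shape of the Campaign Measurement Brief as structured data.
  interface MeasurementBrief {
    strategicContext: {
      missionLink: string;            // why this campaign, in mission terms
      targetPopulations: string[];
      successDefinition: string;      // what 'meaningful' means for this effort
    };
    keyQuestions: string[];           // hypotheses for Analytics, not metrics requests
    dependencies: Array<{
      program: string;                // e.g. 'Clinical Research'
      direction: 'upstream' | 'downstream';
      signal: string;                 // the cross-program signal worth watching
    }>;
    cadence: Array<{
      checkpoint: 'Q1' | 'Q2' | 'Q3' | 'Q4';
      answerable: string[];           // what should be answerable at this stage
      owner: string;                  // who acts on the answers
    }>;
  }

Writing it as a type makes the gap obvious: nothing in the current workflow produces most of these fields.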

The act of writing the brief was as valuable as the brief itself. Sections like “Key Questions for Analytics” and the success framework forced a level of strategic articulation that doesn’t typically happen — not because planners lack the thinking, but because nothing in the current workflow prompts it. The brief made implicit assumptions explicit and created a shared commitment to honest evaluation before anyone has results to defend.

My understanding shifted from seeing this as a handoff problem between two teams to recognizing it as a structural issue embedded in how the agency-client system produces and circulates knowledge. Analytics isn’t under-delivering because they lack skill; they lack context. And Planning isn’t withholding context intentionally — there’s simply no mechanism to transfer it. That reframe matters because it moves the intervention from “fix a workflow” to “redesign how strategic intent travels through the system.”

Planning informs measurement; measurement informs future planning. The brief is where that loop begins.

09

Reflection

Systems thinking isn’t about having the right answer. It’s about having a rigorous relationship with complexity — being willing to map what you think you see, then letting a stakeholder’s lived experience challenge your assumptions.

The most important thing I learned wasn’t about this particular system. It was about how systems feel from the inside. The Accounts Lead doesn’t experience a network map. She experiences the daily texture of translating between two organizations that speak different operational languages. The map makes the structure visible. The interview makes the experience human. You need both.

I’m now curious whether the Campaign Measurement Brief could become a living document rather than a static one — something that evolves at each quarterly checkpoint rather than sitting in a folder after launch. I want to explore whether the cross-program dependency mapping could scale into a system-wide view that clients and internal teams use to make investment decisions, not just measurement plans. And I’m interested in testing this with Analytics directly: does receiving this brief actually change what they build, or does the reporting infrastructure constrain them regardless of how much context they have?

These are the next questions. They’re harder than the ones I started with, and that feels like progress.