I made a little prioritization app. Give it a try! Desktop/laptop only for now.
Warning ⚠️: This post is for people who delight in organizational language and meaning, and stress about it. It’s for you if you’ve ever debated the merits of a “bet” over an initiative, tried valiantly to clarify concepts at your company (a tall order), care deeply about interfaces and domains, or worry about overwhelming teams with zombie processes. You have a super-radar for incoherence. You’ve been known to use the words Taxonomy or Ontology or Semantic Drift. ⚠️If this isn’t your thing, I want to stay on your good side, so maybe skip this one.
After my last post on relationships between objects, I started to feel like the objects (nouns, or nodes) needed some love.
I’m fresh off a week of awesome workshops and endless explorations of words in organizations. “Should we call them bets or initiatives, or both?” “What is a program, exactly?” “What is the job of an initiative?” “A roadmap?” “At what point does an OKR represent stable drivers?” “What is a capability, exactly?”
I’ve been swinging back and forth between “words are very important!” and “maybe I’m missing something?” and “maybe it really doesn’t matter.”
It always feels like a juggling act:
At any meaningful scale, there will be a contingent of well-meaning people eager to turn abstract (yet helpful) ideas into concrete (and less useful) ideas.
Even with a lot of local variation, you’ll likely need some minimally viable consistency across the org for the teams to interface with each other, and with the enterprise strategy and coffers. That would seem to warrant some scrutiny and intentionality to get it right. Unfortunately, these are often the concepts where rigor gives way to optics.
Words (and related concepts) can sometimes be a Trojan horse. They have a way of creating an unlock. So there’s hope!
But you can’t rename your way out of a structural problem. If interactions, behaviors, and practices don’t change, the words won’t either.
In fact, semantic ambiguity, power, and influence are often related. If you can keep people guessing, then you can control the narrative.
There’s a constant tension between “calling things like they are” and playing with a new, as-yet-untested future.
I discuss this later in the piece, but earlier in my career, I was so naive. I believed you could just visualize reality, talk about it, form working agreements, and live happily ever after in a perpetual state of continuous improvement. I was wrong.
Phil Karlton is famously quoted as saying:
There are only two hard things in Computer Science: cache invalidation and naming things.
In organizational contexts and sociotechnical systems, we might reframe this as:
There are only two hard things in Organizational Semantics: semantic drift and semantic closure (and reification).
Put differently…
Each standard and each category valorizes some point of view and silences another.
Geoffrey C. Bowker & Susan Leigh Star
And companies are always running in degraded mode, glossing over local reality to “manage” things.
Designed or planned social order is necessarily schematic; it always ignores essential features of any real, functioning social order.
James C. Scott, Seeing Like a State
This is tough, conceptual stuff! But I do believe the thinking is worth the journey.
I’m going to start this post, as I do many posts, with a thought experiment.
You walk into a company that uses “initiatives” and ask eight people what an initiative is.
The replies:
“A container for aligning multiple teams around a shared outcome.”
“A focused investment of capacity (skill, time, focus, etc.) with a desired outcome.”
“It’s another word for a project, right?”
“A feature, a launch… something bigger than tickets and epics.”
“A hypothesis about how to change a system, tested through coordinated action.”
“Something that takes more than one or two weeks, but less than a quarter.”
“Leaders don’t want to look at epics or stories, so initiatives are basically a way to describe all the stuff we’re doing in neat buckets. A lot of work doesn’t fit, either, and that is OK.”
“It’s what we use to categorize work for capitalization.”
What do you observe? Are these different interpretations an actual problem? Let’s look at some dynamics:
Same thing, different zoom levels.
Some people are talking about the same underlying thing, just at different levels of abstraction. The initiative is a focused investment of capacity at a higher level of abstraction, and a feature or launch at a lower level of abstraction. One is zoomed out. One is zoomed in.
The same word, doing different jobs.
In other cases, the word is doing different work for different people. Finance uses the word to categorize work for capitalization. The chief of staff uses the word to summarize work for leadership (and avoid showing epics and stories). Program management uses it for aligning teams. A product director uses it to help their reports clarify outcomes and assumptions. This is polysemy. The same word can carry multiple meanings, depending on who uses it and why.
What’s worth noting is that humans are actually quite good at juggling these moving targets. People routinely shift between contexts, infer which meaning is in play, and translate on the fly. In different contexts, initiative means slightly different things, and most of the time, that’s fine.
The failure mode is believing there is one domain when there are several, which brings us to…
When those meanings can’t all be true at once.
Finally, we have ontological conflict.
These are cases where two definitions can’t both be true at the same time. In a sense, it is irreconcilable or contested polysemy. If an initiative is “a hypothesis about how to change a system” and “another word for a project,” we have a problem. Hypotheses are meant to be changed or invalidated. Projects assume a known path and a finish line.
Ontological conflict occurs when concepts from different bounded contexts are treated as if they were the same object. If strategy, finance, and delivery all claim their definitions are the only right ones, or fail to acknowledge the impact of their competing definitions, this is classic model collision. 2+ domains are trying to share a noun without a translation layer.
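To make that concrete, here is a minimal sketch (Python, with invented names) of what a translation layer between two bounded contexts might look like. The specifics are assumptions for illustration; the point is that the translation is explicit, rather than two domains silently sharing one noun.

```python
from dataclasses import dataclass

# Hypothetical sketch: two bounded contexts that both say "initiative,"
# kept as separate models with an explicit translation at the boundary.

@dataclass
class DiscoveryInitiative:        # strategy/discovery context
    hypothesis: str               # what we believe and want to test
    invalidated: bool = False     # hypotheses are allowed to die

@dataclass
class CapitalizableWork:          # finance context
    description: str
    capitalizable: bool

def to_finance_view(initiative: DiscoveryInitiative) -> CapitalizableWork:
    """The translation layer: finance never inherits hypothesis
    semantics; it gets only the attributes its domain needs."""
    return CapitalizableWork(
        description=initiative.hypothesis,
        capitalizable=not initiative.invalidated,
    )
```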
Because people will say there’s nothing actionable in this post, I just wanted to share some actionable tactics for embracing polysemy and navigating levels of abstraction.
Product work constantly requires zooming in and out of levels of abstraction. We’re used to the idea that the same tools, the same words, and the same artifacts can serve different jobs for different people. Humans are remarkably good at shifting context, inferring meaning, and translating on the fly.
But lest we think ontological conflict is theoretical or harmless, ask yourself how key strategic decisions, funding decisions, and organizational design decisions are actually made in your company. Consider where different models of work collide, and what happens when incentives, reporting, and authority are locked to particular words or definitions.
That’s where ontological conflict stops being philosophical and begins to have very concrete implications.
It goes without saying that we deal with a wide spectrum of concepts in product development, ranging from the extremely concrete to the highly abstract.
A release happens. Code is merged. Builds are created. Feature flags flipped. Logs change. Alerts fire (or don’t). Something that wasn’t true in the system a moment ago is now true. You can still argue about whether it was successful or well executed, but you can’t argue about whether it occurred.
Meanwhile, labels like initiative, quality, bet, capability, opportunity, problem, value stream, outcome, and epic are more conceptual. This doesn’t make them any less helpful, but it does mean their boundaries are fuzzier. The boundary edges are negotiated, and their meaning depends far more on context. They help us think, align, and decide, but they’re also easier to reify, argue over, and mistake for more concrete things.
(Note: A quick clarification. None of this is meant to privilege the concrete over the abstract. Both are powerful and necessary. Concrete events like releases anchor us in what actually happened. Abstract concepts help us reason, align, and decide in the face of complexity.)
I’m reminded of the paradox of the heap (Sorites paradox). If you start with a heap of sand and start removing a grain at a time, at what point does the heap of sand stop being a heap of sand?
Some product development versions:
At what point does a capability become a feature?
A new feature becomes an old feature?
An output becomes an outcome?
An input becomes an output?
At what point does a story become an epic?
At what point is a customer journey more like a collection of networked journeys?
How opportunity-like does a problem need to be to be considered an opportunity?
Exactly when are you shifting from discovery to delivery?
How many teams must be involved for something to be “cross-team”?
When does a platform capability become a product?
When does a strategy become too tactical to be called a strategy?
When does an initiative “stop” (assuming the impacts are still accruing)?
The “right” (hopeful) answer is that it probably doesn’t matter, as long as you’re doing impactful work. You’ll know it when you see it. But what if someone is making well-meaning but ultimately harmful decisions elsewhere in the organization because they think the boundaries are clearer than they really are, and that their interpretation is the only correct interpretation?
Which brings us to reification—one of the primary drivers of ontological conflict in organizations.
(Reify: To treat an abstract concept, model, or idea as if it were a real, concrete thing)
It’s all bets, opportunities, and BAU until someone loses their job.
A classic example is “BAU” vs. “strategic initiative.” The line seems arbitrary. How “usual” does it need to be to be BAU? How strategic to be strategic? But when someone in finance treats “strategic initiatives” as the only real value drivers and disregards the value of BAU work, we have a problem.
No joke, this behavior and math have led to layoffs in some organizations.
There are very real implications here!
Why does it happen?
The best description I’ve found of reification in action is in James C. Scott’s Seeing Like a State. In the book, Scott describes how organizations try to simplify complex, lived, and emergent realities to make them legible, comparable, and governable from a distance. These simplifications aren’t malicious and, in many cases, are necessary. The problems arise when the model designed to support administration and control is mistaken for reality itself.
That’s exactly what’s happening here. Labels like initiative, strategic, or BAU start as useful abstractions, created to help with funding, reporting, or coordination. But over time, they harden and are used to regulate product development in ways that are fundamentally incompatible with learning-heavy, adaptive work.
If you own your home, you’re probably pretty happy that contracts exist, that property lines are defined, that zoning laws apply, that building codes are enforced, and that there’s a legal system backing all of that up. Those rules make ownership possible. They reduce risk. They create stability. Rules aren’t the problem.
The problem is when rules designed for administration and protection are mistaken for a complete description of reality, and then used to override local knowledge, lived context, and good judgment.
Examples:
The business case thrown together to pass gate X becomes a real thing, an accounting object, instead of a collection of assumptions to test.
People start believing story points are something more than throwaway shaping and scoping mechanisms.
A quick side story. I worked with a VP once who couldn’t care less about any of this. The finance machinations, lossy status views, and categorization schemes were just annoyances to live with and essentially work around. They played along and checked all the boxes, but behind the scenes they were always telling their teams to focus on impact and to basically sleepwalk through every company process.
And at the end of the day, impact will talk, and people will have more confidence in product-led approaches.
I’ve been thinking about this stance a lot lately. It is not a bad strategy: pragmatic without being openly subversive. And in many ways, they were right. The only way to shift the finance team’s stance was to show them what real outcomes look like. Until then, it was all theoretical.
It also represents a common view of corporate governance: that, almost by definition, it is disconnected and reified, and that the sooner you learn that and work around it, the better. Instead of trying to fix the reification through transparency and shared understanding, you try to “beat” it with a better story.
So what if someone in finance wants to see a list of initiatives and hours? We’ll just fill in the forms, placate them, and work our damnedest to upend that model.
They weren’t fighting the system. They were refusing to let it define reality. It also reflected a belief that a new reality would emerge from things that happened, not from the words. As Latour reminds us, institutions only become real when practices hold together long enough to make them so.
An institution is not a thing, but a set of practices that has become durable.
I know a successful technology leader, author, and now a top-notch consultant who swears small word shifts and model hacks can nudge an organization to better results. Quickly. He has the receipts and proof.
Here’s his general approach:
He has legitimacy, having been a leader in a complex organization. Whenever someone says, “but can it work in [insert highly regulated domain]?” he has a reasonable, lived-experience answer that puts fears at rest.
He operates at the leadership level, typically with people who have bought into his message to some degree.
He introduces a small set of “governing” ideas that are, in fact, very generative and enabling. The model checks the boxes and feels simple, yet it is also a “Trojan horse” for more subversive ideas.
What surprised me (and him) is that it is a lot easier than one might think to “hack” a company with the right framework, but only under the right conditions. It is highly unlikely someone on the front lines could suggest these ideas and get any traction. You have to “talk the talk” of risk and real-world accounting and finance implications. And you have to apply leverage at just the right spot, with just the right hack.
These stories describe two different ways leaders respond to reified models of product work, both grounded in real experience.
The pragmatic VP worked around finance machinations and lossy status views, complying just enough to protect teams while focusing on impact.
The Trojan Horse Expert took the opposite approach, hacking the governance layer directly and introducing seemingly lightweight, compliant-looking frameworks that successfully reshaped behavior on multiple fronts.
Both approaches accept that abstraction and reification are facts of organizational life. The difference is whether you outgrow the model through practice or rewire it through carefully designed ideas and power dynamics/identity hacking—or both.
I put both ideas in contrast to the naive old me of twelve years ago:
We’ll put the messy reality of how things actually are up on the board!
We’ll grapple with what things really mean together!
We’ll start with things as they are right now!
You just need to adopt a culture of continuous improvement!
Oh, how deluded and optimistically naive I was!
Alicia Juarrero’s work on constraints offers a useful lens here. She argues that coherence does not come from forceful causes or fixed definitions, but from enabling constraints that shape how systems evolve. These constraints create the conditions for action and learning. As patterns of interaction stabilize, they become constitutive constraints that allow an identity to hold together. Over time, some of these harden into governing constraints that regulate behavior at scale.
Constraints shape behavior
Repeated behavior stabilizes into patterns, and
Those patterns are what we later recognize as coherence.
This helps explain why attempts to regulate product development through rigid categories and lifecycle models so often backfire. When governing constraints are treated as if they define the work itself, they crowd out the enabling and constitutive constraints that make good product work possible, and what should remain adaptive becomes frozen.
Viewed this way, the pragmatic VP is hacking the system by protecting enabling and constitutive constraints at the level of practice. They allow teams to learn, adapt, and coordinate, even if that means treating governance constraints such as reporting and categorization as ceremonial. The VP bets that if good work patterns are allowed to stabilize, a new reality will emerge organically, and governing constraints will eventually adjust in response.
The Trojan Horse Expert is hacking the system at a different layer. Rather than working around governing constraints, they intervene directly in them. By introducing small, legitimate-looking models, they reshape governing constraints, making them more enabling than suppressive. When this works, it changes not just local practice, but how the organization allocates attention, funding, and authority.
Naive John did neither of these. In hindsight, I wasn’t hacking practice or hacking models. I was asking the system to reason its way out of a problem it had been designed (implicitly and explicitly) to create.
Also operative here is the idea of objects not just as labels, but as containers for behavior and interaction, and as enabling constraints.
I’m a big fan of participating in event storming, though I can’t claim any deep domain-driven design expertise. What I love about event storming is that it shifts the focus to what is actually happening, rather than getting stuck in the noun-based debates teams so often fall into (for example, “Is this a ___ or a ___?”).
Meaning emerges through understanding interaction. Applied to polysemy and our beloved initiative, this suggests a different move. Instead of endlessly arguing about what an initiative is, we can view the domains in which initiatives operate as containers of coherent behavior, and examine the interfaces between those domains. Where do handoffs work? Where do assumptions leak? Where do definitions drift into ontological conflict?
Example:
Insight discovered → Discovery
Bet approved → Investment
Work started → Team
Work shipped → Release
Value observed → Outcomes
Direction changed → Governance
Where is the risk?
This doesn’t eliminate ambiguity, but it gives teams something far more actionable than a “correct” definition. Instead of debating what an initiative is, we can observe events, group them by domain, and study how behavior coheres within and breaks between those domains.
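If you want to tinker, here is a toy sketch of that move in Python, using the events and domains from the example above. It is illustrative only: start from events, group them by domain, then look at the consecutive handoffs, which is where assumptions tend to leak.

```python
from collections import defaultdict

# Toy event stream mirroring the example above (names illustrative).
events = [
    ("insight discovered", "Discovery"),
    ("bet approved", "Investment"),
    ("work started", "Team"),
    ("work shipped", "Release"),
    ("value observed", "Outcomes"),
    ("direction changed", "Governance"),
]

# Group behavior by domain: each domain is a container of coherent events.
by_domain = defaultdict(list)
for event, domain in events:
    by_domain[domain].append(event)

# The interfaces: consecutive handoffs between domains, where
# assumptions leak and definitions drift into conflict.
handoffs = [(a, b) for (_, a), (_, b) in zip(events, events[1:])]
print(handoffs)  # [('Discovery', 'Investment'), ('Investment', 'Team'), ...]
```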
You can think of this as several lenses we can deliberately move between and interrogate.
[Object] is a container for [interactions, behaviors, decisions, or conversations].
Examples: decision-making, coordination across teams, discovery work, sequencing delivery, negotiating trade-offs.
This lens asks: What actually happens inside this thing? Who interacts? Why?
Tip: If people struggle to name concrete interactions, the object may already be over-abstracted.
[Object] is an enabling constraint meant to drive coherence around [what, specifically].
Examples: a shared outcome, a learning goal, a time-box, a risk boundary, a strategic intent.
This lens asks: What does this object intentionally constrain or enable? What kind of coherence is it meant to produce?
Tip: If different groups name different kinds of coherence (learning vs. predictability vs. accounting), you’ve found polysemy.
We count, track, or report on [Object] to [make what decision, enforce what rule, or allocate what resource].
Examples: funding decisions, prioritization, capitalization, performance evaluation, executive reporting.
This lens asks: Why does the organization care about this object? What incentives or controls are attached to it?
Tip: This is where governing constraints tend to hide.
We can understand [Object] in the context of its relationships to [teams, goals, funding, releases, customers, risks, other objects].
This lens asks: What does this object depend on, influence, or connect to? How does its meaning change based on what it touches?
Tip: Ontological conflict often shows up at these boundaries, not inside the object itself.
When different people give incompatible answers to these prompts for the same object, the issue usually isn’t misunderstanding. It’s that the object is carrying different assumptions, constraints, and responsibilities depending on who is looking at it.
Juarrero would see these lenses as linked in a feedback loop rather than as independent perspectives. Interaction gives rise to constraints. Constraints stabilize patterns of behavior. Stabilized patterns become governing forces that then shape future interaction. The relational lens shows where these loops reinforce coherence and where they break down across domains.
As mentioned above, a great thing about event storming is that it makes these loops visible without forcing agreement on definitions. By starting with events, it grounds the conversation in interaction, then naturally exposes the constraints, governance mechanisms, and relationships that shape those interactions over time.
At Dotwork, we joke about falling into the “Noun Farming Trap.”
We do a lot of noun (and verb, relationship) farming—basically deep-diving on artifacts and extracting the nouns and, as we discussed in last week’s post, how the nouns relate. This fixation makes sense because we build software. Software must compile. Even flexible, graph-native software must compile.
If you’re working in a Google Doc, you can list thirty objects and weave together a fluid story without worrying much about what things really are. If the story works, the doc works. But in a software tool, something needs to decide what to show in the left nav. Something needs to define what pops up when you click a row. To define a relationship, you need two objects to connect. We also have to define “spaces” where teams can develop their own language and figure out how to map that language (and potential governance, etc.) to each other. Boundaries matter.
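Here is a rough sketch of what “must compile” means in practice. Every name is invented, but the shape is the point: even a graph-native tool needs explicit object types before a relationship between two of them can exist.

```python
from dataclasses import dataclass, field

# Invented, minimal schema: what a graph-native tool must pin down
# before it can render a nav item or connect two rows.

@dataclass
class ObjectType:
    name: str                                  # e.g. "Initiative", "Outcome"
    fields: dict[str, type] = field(default_factory=dict)

@dataclass
class RelationshipType:
    name: str                                  # e.g. "contributes_to"
    source: ObjectType                         # a relationship needs two
    target: ObjectType                         # concrete object types

initiative = ObjectType("Initiative", {"title": str})
outcome = ObjectType("Outcome", {"metric": str})
contributes_to = RelationshipType("contributes_to", initiative, outcome)
# A Google Doc never has to make these choices; software does.
```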
We can’t escape the desire to govern and make things legible. Still, we can nudge customers towards effective polysemy and abstraction-level spanning, and towards establishing healthy interfaces between domains. We can, hopefully, enable healthy legibility.
The noun farming trap is when you get carried away with this exercise and forget about rituals and interactions.
But as Juarrero explains in Context Changes Everything:
Identity is not conferred by intrinsic properties, but emerges from patterns of interdependence that hold over time.
You admire the concept map, while forgetting what’s happening.
Through Juarrero’s lens, the Sorites paradox isn’t a paradox at all. It only appears paradoxical if we assume that dynamic patterns have crisp, object-like boundaries. A heap is not a thing with an essence. It is a pattern sustained by constraints. As those constraints weaken, coherence fades. There is no single grain where “heapness” disappears, only a gradual loss of stability. The same logic applies to many of the categories we use in product development.
At the risk of getting slightly nerdy, this way of thinking should feel familiar to anyone who has worked with event-based or event-sourced systems. In those systems, the “state” of a thing isn’t treated as a static object with intrinsic properties. It’s an emergent summary of what has happened over time. The thing is what it has done, constrained by the system’s rules.
Seen this way, noun farming is a bit like freezing the current state of an event stream and mistaking that snapshot for the thing itself. In many ways, DDD’s obsession with events and domains is a rebellion against reification.
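For anyone who wants that spelled out, here is a minimal event-sourcing-flavored sketch (event names invented): state is a fold over what happened, and any snapshot is a derived, disposable view.

```python
from functools import reduce

# Invented event stream for one piece of work.
events = [
    {"type": "bet_approved"},
    {"type": "work_started"},
    {"type": "work_shipped"},
]

def apply(state: dict, event: dict) -> dict:
    """The system's rules constrain what each event may change."""
    transitions = {
        "bet_approved": "approved",
        "work_started": "in_progress",
        "work_shipped": "shipped",
    }
    new_status = transitions.get(event["type"], state["status"])
    return {**state, "status": new_status}

# The "thing" is what it has done: state emerges from the fold.
snapshot = reduce(apply, events, {"status": "proposed"})
print(snapshot)  # {'status': 'shipped'} -- a snapshot, not the thing itself
```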
If anyone has gotten this far, I’m very grateful. This is me thinking/exploring in public, and I do this fully aware that it can be dense and rambling. Here are a handful of actionable ways to put the ideas in this post into motion.
Shift focus toward things that actually happen and are easier to reason about across levels of abstraction. Milestones happen. Pull requests happen. Releases happen. You can debate what an initiative or a bet really is, but it’s hard to argue about whether progress was made. This isn’t to say that opportunities, bets, or initiatives aren’t valuable containers of activity. They are. But when deciding what to elevate and coordinate across the organization, it often helps to anchor on more concrete signals of movement. I recently met a team that treats milestones as the primary “API” across the org.
Milestones can be attached to many different objects, but by centering on something that happens, they reduce semantic drift and make progress legible without forcing agreement on higher-level abstractions.
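A sketch of what that might look like as data (names hypothetical): the milestone is the concrete, shared unit, and what it attaches to is deliberately left open.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical "milestones as the API" shape.
@dataclass
class Milestone:
    title: str
    attached_to: str                  # id of any object: an initiative, bet, epic...
    happened_on: date | None = None   # it either happened or it didn't

m = Milestone("Beta available to 10 customers", attached_to="initiative-42")
m.happened_on = date(2025, 3, 1)      # progress is legible without agreeing on nouns
```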
One way to reduce confusion caused by polysemy is to agree on a thin base definition that captures the essence of a concept, for example, “An initiative is a bounded, intentional effort aimed at producing a meaningful change or outcome,” and then allow specialized forms under that definition, such as hypothesis-driven experiments, cross-team delivery efforts, or time-boxed pushes toward a goal.
Engineers will recognize this immediately as the same move as defining an abstract base class with multiple concrete implementations.
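As a sketch (names invented), the move might look like this: a thin base definition, with concrete forms that carry their own rules for what “done” even means.

```python
from abc import ABC, abstractmethod

class Initiative(ABC):
    """Thin base: a bounded, intentional effort aimed at meaningful change."""
    def __init__(self, intent: str):
        self.intent = intent

    @abstractmethod
    def is_done(self) -> bool: ...

class HypothesisInitiative(Initiative):
    """Hypothesis-driven form: invalidation counts as a successful ending."""
    def __init__(self, intent: str):
        super().__init__(intent)
        self.validated = None        # None until validated or invalidated

    def is_done(self) -> bool:
        return self.validated is not None

class DeliveryInitiative(Initiative):
    """Cross-team delivery form: done means the scoped work shipped."""
    def __init__(self, intent: str, scope: list[str]):
        super().__init__(intent)
        self.scope = scope
        self.shipped = set()

    def is_done(self) -> bool:
        return set(self.scope) <= self.shipped
```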
This approach works when the base definition stays lightweight, and the differences between forms are explicit and respected. It can also help avoid ontological conflict, as long as you design for the fact that different shapes of initiatives carry different rules around success, reversibility, governance, and accounting. Trouble starts when those differences are flattened, and everything is forced into the same lifecycle, reporting expectations, or funding model.
Related to polymorphism but solving a different problem is the decision to keep certain core definitions intentionally plain and under-specified. Where polymorphism introduces variation by kind, this tactic limits the amount of meaning any single object can carry on its own.
For example, describing an initiative as “a focused investment of capacity” leaves open what that investment is focused on. The initiative can then be linked to outcomes, opportunities, risks, or value hypotheses without collapsing all of that meaning into the noun itself.
If, instead, you define an initiative as “a value delivery mechanism,” you lose that flexibility and hard-code assumptions about purpose and success prematurely.
This approach works because it treats objects as useful containers for coordination, while allowing meaning to emerge through relationships rather than definitions. It’s less about creating specialized types and more about resisting the urge to make any one word do too much explanatory work.
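In data, the idea might look like this (names invented): the object stays thin, and meaning accrues through links rather than being baked into the noun.

```python
from dataclasses import dataclass

@dataclass
class Initiative:
    name: str
    capacity_weeks: float    # just the focused investment of capacity

@dataclass
class Link:
    initiative: Initiative
    relation: str            # e.g. "tests", "addresses", "mitigates"
    target_id: str           # an outcome, opportunity, or risk defined elsewhere

init = Initiative("Self-serve onboarding", capacity_weeks=6)
links = [
    Link(init, "tests", "hypothesis-12"),
    Link(init, "addresses", "opportunity-7"),
]
# Nothing about "value delivery" is hard-coded into the object itself.
```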
Another way to reduce confusion is to treat certain concepts as fractal.
The organization agrees on what a bet is, but accepts that bets can exist at many levels of abstraction. The core patterns and behaviors are similar, such as insight, appetite, options, and shaping, but they’re performed at different scales. Leaders shape larger, strategic bets, while teams shape smaller bets within their domains.
This tactic differs from polymorphism, in which variation occurs by kind.
In a fractal model, variation occurs at different scales. A team-level bet isn’t a different type of thing than a leadership-level bet; it’s the same kind of thing operating under various constraints.
This approach works when it’s clear which level a bet belongs to, who has the authority to shape it, and how decisions at one level constrain or inform those at another. The risk appears when bets at different levels are collapsed together, treated as roll-ups, or governed as if they all live at the same altitude.
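A minimal sketch of the fractal shape (names invented): one kind of thing at many scales, with higher-level bets constraining lower-level ones.

```python
from dataclasses import dataclass

@dataclass
class Bet:
    insight: str
    appetite_weeks: int
    level: str                      # "leadership" | "group" | "team"
    parent: "Bet | None" = None     # a higher-level bet constrains this one

strategic = Bet("Enterprise demand is shifting", appetite_weeks=26,
                level="leadership")
team_bet = Bet("SSO unblocks mid-market deals", appetite_weeks=4,
               level="team", parent=strategic)
# Same shape and behaviors (insight, appetite, shaping) at every level;
# only the scale, and the authority to shape it, differ.
```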
Sometimes the problem isn’t that a word is doing too much work, but that it’s failing in very specific cases. Take something like initiative. If you use it for everything, you may start to notice that it breaks down when you’re talking about highly dependent, cross-organizational efforts.
That’s often a signal to name the exception.
You might introduce something like the Big Messy Initiative or the Cross-Org Effort to call out the difference explicitly.
The point isn’t to create a permanent new category or to nest everything under it. It’s to surface the fact that this case behaves differently and deserves different attention. Naming the exception draws focus to the risk, coordination, and trade-offs involved, without forcing the rest of the system to bend around an edge case that was never representative in the first place.
An extension of fractal abstraction is to anchor work around tangible artifacts or rituals rather than purely abstract objects. Some companies do this extremely well. Amazon’s PRFAQ and six-pager are good examples. These artifacts can be applied fractally to many problems at many levels of the organization. A single PRFAQ might represent a large strategic bet, while others operate at the team or feature level. They often spin off multiple workstreams, experiments, or follow-on ideas, but the artifact itself provides a shared focal point.
What’s important is that these aren’t procedural documents no one reads. They’re ingrained in the culture and repeatedly used to shape thinking, discussion, and decision-making. The same pattern shows up in other companies, such as regular customer problem reviews, written decision memos, and structured retrospectives that scale from teams to leadership. The power here isn’t the artifact or ritual itself.
It’s the same pattern that is reused across levels, providing coherence without forcing everything into a single abstract category.
The goal isn’t to find the right frame. It’s to make the system robust to many frames.
One of the benefits of keeping core objects utilitarian is that you can accept, rather than fight, the fact that work will be viewed through many different frames for many different purposes.
An initiative defined as a focused investment of capacity doesn’t have to “mean” anything in particular. Because it’s grounded in something concrete like time, attention, or spend, you can slice and reinterpret that investment across different models without changing the underlying object.
The same initiative can be understood through a customer journey lens, a product taxonomy, a growth stage, a metrics tree, or a goal hierarchy. This embraces polysemy instead of trying to eliminate it. Rather than arguing over the correct categorization, the organization can ask different questions of the same underlying reality, depending on the decision at hand.
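One last sketch (names invented): the initiative stays put, and each frame is just a different question asked of the same underlying object.

```python
# One utilitarian object, many frames.
initiative = {"name": "Self-serve onboarding", "capacity_weeks": 6}

frames = {
    "customer journey": lambda i: f"{i['name']} touches the activation journey",
    "product taxonomy": lambda i: f"{i['name']} sits under Growth > Onboarding",
    "metrics tree":     lambda i: f"{i['name']} should move week-1 retention",
}

for frame, view in frames.items():
    print(f"{frame}: {view(initiative)}")
# Reinterpreting the investment never mutates the object itself.
```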