Antipatterns of Large Organizations

This is a brief discussion of three patterns that limit the success of large organizations: the McNamara fallacy, the Peter principle, and the Preventable Problem Paradox. In organizations larger than a few dozen people[0], instances of these concepts appear naturally as a consequence of common incentive structures. Care must be taken to guard against them when designing or leading such an organization.

McNamara Fallacy

Also known as the quantitative fallacy, this is the inclination to value things based on how easily they can be measured. Daniel Yankelovich introduced this term in a 1971 conference talk:

The first step is to measure whatever can be easily measured. This is okay as far as it goes. The second step is to disregard that which can’t be easily measured or give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can’t be measured easily really isn’t very important. This is blindness. The fourth step is to say that what can’t be easily measured really doesn’t exist. This is suicide.

Of course, this is not an indictment of quantitative methods in general, but rather a warning against a form of quantitative orthodoxy that is unable to recognize value without an attached numerical measure.

In practice this often looks like an organization refusing to acknowledge value that arrives without a measurement. For example, experienced team members may suggest ideas or raise concerns based on experience or the intuition born of it, only to see those contributions dismissed because no supporting metric can be attached.

There are also perverse second-order effects of this way of thinking. An organization that only recognizes quantitative, easily measurable impact will limit the ways its teams work. Before any project is begun, a question must be answered: ‘How will we measure the impact of this project’s success?’ Initiatives whose outcome (however positive) does not lend itself to measurement will be deemed de facto low-ROI and deprioritized.

Projects well-suited to quantitative analysis are affected as well. Because measurements are necessary to justify work in the eyes of the organization, the costs of measuring both opportunity and impact are effectively rolled into the cost of project delivery (customers, of course, are only affected by the delivery itself and couldn’t care less about measurements). A project may be deemed low-ROI because its associated measurement is costly, regardless of the delivery cost!
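As a toy sketch of the arithmetic (all numbers here are invented for illustration), folding measurement cost into a project’s cost can make an otherwise attractive project look marginal:

    # Toy illustration (invented numbers): measurement cost distorts perceived ROI.
    impact = 100.0           # value the project delivers, in arbitrary units
    delivery_cost = 20.0     # cost to actually ship it
    measurement_cost = 60.0  # cost to instrument, collect, and analyze impact data

    # Customers experience only the delivery.
    delivery_roi = impact / delivery_cost                        # 5.0
    # The organization's accounting includes the measurement overhead.
    perceived_roi = impact / (delivery_cost + measurement_cost)  # 1.25

    print(f"delivery ROI: {delivery_roi}, perceived ROI: {perceived_roi}")

The project is the same in both cases; only the accounting differs.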

To reiterate, there is nothing wrong with quantitative methods as a general approach; this is a discussion of application at the margin rather than the center. But without intentional effort it is natural for that margin to be ceded entirely to the quantitative side. Expert judgment[1] is one way to allow an organization to recognize work that is valuable but not easily measurable. Another is to reify hard-to-measure objectives via performance reviews or other processes in order to incentivize related work[2]. This is a blunt instrument as described, but keep in mind the alternative is that certain types of projects will simply never happen.

Peter Principle

The Peter Principle is the idea that employees are promoted to their level of incompetence: if promotion is based on strong performance in a current role, employees will continue to be promoted until they reach a role in which they perform poorly, and there they will remain.

Solutions to the immediate issue are much more apparent than with the McNamara fallacy, and as a result most of the difficulty lies in identifying and compensating for second-order effects. Wikipedia collects solutions from a few sources, including an ‘up or out’ culture in which poorly performing employees are more easily fired. Google takes a different approach: employees are generally expected to demonstrate sustained performance at the level above their own before they are promoted to it.

In these and other cases, the attitudes and behavior of employees (as always) will be greatly affected by the specific incentives around promotion and performance. The particular levers available to an organization seem to be: ease of firing or demoting vs. employee loyalty and security; propensity for promoting internally vs. filling senior positions externally; and coupling vs. decoupling of promotion and compensation.

Preventable Problem Paradox

Although similar ideas exist under various names, this phrasing and its definition are attributed to Shreyas Doshi:

Any complex organization will over time tend to incentivize problem creation more than problem prevention.

This is related to the McNamara fallacy: it is easy to measure (and therefore recognize) the impact of fixing visible, urgent problems. Likewise, it is difficult to measure (or even assert the existence of) the impact of preventing such problems, despite the fact that prevention is often both higher value and lower cost.
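A toy expected-value sketch (again, all numbers invented) shows how prevention can be both cheaper and more valuable while producing nothing legible to measure:

    # Toy illustration (invented numbers): prevention vs. cure in expected-value terms.
    incident_probability = 0.5  # chance the problem occurs in a given period
    incident_cost = 500.0       # cost of fixing it after the fact, plus fallout
    prevention_cost = 50.0      # cost of preventative work up front

    expected_incident_cost = incident_probability * incident_cost  # 250.0
    expected_savings = expected_incident_cost - prevention_cost    # 200.0

    # The engineer who fixes the incident can point to a visible 500-unit save;
    # the engineer who prevented it can point to nothing measurable at all.
    print(f"expected savings from prevention: {expected_savings}")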

Once again, the organizational design issue is the incentive structure. Given that the default environment does not reward problem prevention, where is improvement possible? Doshi suggests pre-mortems as a concrete step, along with simply socializing the idea of the paradox itself.

In a product organization, this issue is likely concentrated in areas like security and reliability. Recognizing these areas and explicitly incorporating prevention and risk analysis into their processes is a good step (though beware of a second-order effect in which these functions impose unchecked costs on the larger organization). To gauge how much preventative work is warranted, it may be useful to explicitly identify the organization’s appetite for risk as a cost it pays for speed. Prevention-oriented navel-gazing, while it may sound like an insidious danger, is in fact easily avoided with minor oversight.


[0] This is around the point where it is no longer feasible for each person’s work to directly affect the success of the organization as a whole. Without this basic incentive alignment in place, abstractions like performance reviews and promotion criteria (and their embedded values) start to drive the organization’s success.

[1] As we depart from purely quantitative methods, skeptics will be quick to notice the opportunity for favoritism and other typical human biases to creep in. While it would be foolish to assume that more apparently objective methods shield an organization from such issues, I personally believe that there is no substitute for relying on the capability of talented team members. Organizations that will not do this (due to the McNamara fallacy) or cannot do this (because they lack the talent) will be limited in their execution beyond a certain point.