The Specific Claim Being Made
McKinsey's recently circulated AI leadership playbook advises firms to flatten management layers and accelerate decision velocity by deploying AI to absorb the coordination work that middle managers have historically performed. The consulting logic is straightforward: if AI can aggregate information, synthesize reports, and route decisions upward, then the human infrastructure built around those tasks becomes redundant. Several outlets are now calling this "The Great Flattening." The prescription sounds efficient. The underlying assumption is where the problem lives.
What the Org Chart Actually Encodes
Classical organizational theory treats hierarchy as a solution to bounded rationality. Managers exist, in Simon's (1947) formulation, because no single actor can process all relevant information simultaneously. The layer-by-layer structure of a traditional firm is not bureaucratic inertia; it is a series of nested competence assumptions. Each managerial tier is expected to arrive pre-equipped with domain knowledge, interpretive judgment, and the ability to translate organizational directives into executable tasks. The hierarchy works because it presupposes ex ante competence at every node.
The McKinsey playbook implicitly accepts this presupposition and then proposes to replace the node rather than interrogate the assumption. The premise is that because AI can perform the information-routing function that middle managers perform, middle managers are removable. But this conflates coordination with competence. Middle managers do not merely route information; they interpret ambiguous signals, absorb contextual variance, and produce actionable schemas for the workers beneath them. Kellogg, Valentine, and Christin (2020) document precisely this dynamic in algorithmically managed workplaces: when human intermediaries are removed and algorithmic systems take over coordination, workers face a structural literacy problem that the algorithm itself cannot resolve. The system coordinates; it does not teach.
The Flattening Creates a Competence Vacuum, Not a Competence Transfer
This is where the McKinsey prescription runs into what I would call the competence vacuum problem. Flattening an organization by removing middle management does not redistribute the interpretive work those managers performed. It leaves that work unassigned. The employees who previously operated within a competence-rich environment - where a manager could translate strategic ambiguity into tactical clarity - are now expected to perform that translation themselves, using AI tools whose internal logic they may not understand.
Hatano and Inagaki (1986) draw a useful distinction here between routine expertise and adaptive expertise. Routine expertise is the capacity to execute well-defined procedures efficiently. Adaptive expertise is the capacity to understand why those procedures work and to modify them when conditions change. Middle managers, at their best, provide adaptive expertise to the teams beneath them. An AI coordination layer, as currently implemented, provides procedural routing. It tells workers what to do next; it does not help them understand the structural logic of why that sequence matters. When the environment shifts - and in a flattened, AI-accelerated firm, it will shift faster, not slower - workers without adaptive expertise will fail at exactly the moment speed is most critical.
The Yale Economist's Argument Makes This Worse, Not Better
A newly released NBER paper by Yale economist Pascual Restrepo adds a complicating argument to this conversation. Restrepo argues that AGI will not automate most jobs because most jobs are not economically worth automating given current cost structures. The implication is that firms will selectively automate coordination tasks that are expensive and well-defined, while leaving ambiguous, low-margin cognitive work to humans. If Restrepo is correct, then the workers who remain after flattening will be precisely the workers handling the most structurally ambiguous tasks - the ones that require the adaptive expertise that neither AI nor reduced management layers can supply. The organizational residue of The Great Flattening is a workforce assigned to complexity without the interpretive infrastructure to manage it.
What This Means for Governance, Not Just Operations
The governance implication is underappreciated in the current coverage. Boards approving workforce restructuring based on McKinsey's AI playbook are accepting an implicit claim: that the coordination value previously housed in human management layers can be extracted and replicated algorithmically without loss. This claim has not been empirically tested at scale. Rahman (2021) shows that algorithmic control systems create what he calls an invisible cage - workers are constrained by systems whose logic they cannot see, appeal, or modify. Flattening an org chart while introducing AI coordination does not liberate workers from this dynamic; it intensifies it by removing the human intermediaries who might otherwise translate, buffer, or contest algorithmic directives.
The governance question for any board evaluating this playbook is not "can AI perform coordination functions?" The answer to that is probably yes for well-structured tasks. The sharper question is: "What happens to the interpretive and adaptive competence that was distributed across the management layers we are removing, and where does it go?" McKinsey's playbook, as reported, does not answer that question. That absence is itself diagnostic.
References
Hatano, G., and Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, and K. Hakuta (Eds.), Child development and education in Japan (pp. 262-272). Freeman.
Kellogg, K. C., Valentine, M. A., and Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410.
Rahman, H. A. (2021). The invisible cage: Workers' reactivity to opaque algorithmic evaluations. Administrative Science Quarterly, 66(4), 945-988.
Simon, H. A. (1947). Administrative behavior: A study of decision-making processes in administrative organization. Macmillan.
Roger Hunt