
The Specificity-Scalability Tradeoff

The tension between bespoke output and scalable process is not new — sociologists have been documenting it for over a century. This paper grounds the current gen AI paradox in classical social theory.

Michael Feder · Feb 17, 2026

Introduction

The generative AI paradox that organizations face today — the tension between producing output that is specific enough to be valuable and processes that are scalable enough to be efficient — is not new. It is, in fact, one of the oldest tensions in the sociological study of labor, organizations, and meaning-making.

This paper argues that the current moment in AI adoption can be understood more clearly by grounding it in classical social theory. The thinkers who first articulated the contradictions of industrialization, bureaucratization, and symbolic interaction were describing the same fundamental tradeoff that organizations now encounter when they attempt to scale generative AI.

The specificity-scalability tradeoff is not a technical problem. It is a social one. And the social sciences have been studying it for over a century.

Durkheim: The Division of AI Labor

Émile Durkheim’s distinction between mechanical and organic solidarity provides a foundational frame for understanding how AI adoption fragments organizational coherence.

In a pre-AI organization — or one in the early stages of adoption — solidarity around work processes tends to be mechanical. People do similar things in similar ways. There is a shared understanding of how work gets done, even if that understanding is informal.

As AI is adopted unevenly across an organization, a division of labor emerges. Some teams use AI extensively. Others don’t. Some individuals develop sophisticated workflows. Their colleagues remain unaware. The organization begins to exhibit what Durkheim would recognize as organic solidarity — interdependence through specialization — but without the institutional structures that make organic solidarity functional.

The result is fragmentation disguised as progress.

Marx: Alienation from Output

Marx’s theory of alienation, typically applied to factory labor, finds an unexpected resonance in the generative AI context.

When a knowledge worker uses AI to produce output they don’t fully understand, haven’t fully shaped, and can’t fully evaluate, something structurally similar to alienation occurs. The worker is separated from the product of their labor — not because a factory owner extracted it, but because the production process itself has become opaque.

Organizations that scale AI without attending to this dynamic will find that their people become increasingly disconnected from the output they produce.

This is not a productivity problem. It is a meaning problem. And meaning, as Marx understood, is not a luxury. It is the foundation of engagement, accountability, and quality.

Weber: The Bureaucratization of Intelligence

Max Weber’s analysis of bureaucracy — its efficiency, its rationality, and its tendency to become an "iron cage" — provides the most direct frame for understanding AI governance failures.

Organizations adopt AI and immediately encounter the governance question: who decides how it’s used, what it’s used for, and what standards apply? The instinct, in most organizations, is to bureaucratize. Create a policy. Form a committee. Establish approval workflows.

Weber would recognize this immediately. It is the rational response to uncertainty — routinize the decision, standardize the process, remove individual discretion. And it fails for exactly the reasons Weber predicted: the bureaucratic structure becomes an end in itself, optimizing for compliance rather than value.

The iron cage of AI governance is a set of policies that no one follows because they were designed for a legibility the organization doesn’t have.

Goffman: The Performance of AI Competence

Erving Goffman’s dramaturgical framework — the idea that social life is a series of performances managed for specific audiences — illuminates one of the most underexamined dynamics of AI adoption: the performance of competence.

In most organizations, there is enormous pressure to appear fluent with AI. Leaders perform strategic vision about AI transformation. Middle managers perform operational integration. Individual contributors perform productivity gains. Everyone is performing — and the performances are often disconnected from the underlying reality.

The specificity-scalability tradeoff is, in Goffman’s terms, a staging problem. Scalable AI processes produce front-stage outputs that look competent. But the back-stage reality — the lack of human judgment in the production process — means that the performance of competence is increasingly detached from actual competence.

Burke: The Terministic Screen of AI

Kenneth Burke’s concept of the terministic screen — the idea that the language we use to describe reality simultaneously selects and deflects aspects of that reality — offers a final, crucial lens.

The language of AI adoption is a terministic screen. When organizations talk about "AI transformation," "intelligent automation," or "augmented decision-making," they are selecting certain aspects of reality (efficiency, capability, progress) and deflecting others (alienation, fragmentation, loss of meaning).

Burke would argue that until the terministic screen shifts — until organizations develop language that can name both the gains and the losses of AI scalability — they will be structurally unable to address the tradeoff. You cannot solve a problem you cannot articulate.

Synthesis: The Tradeoff as Social Structure

The specificity-scalability tradeoff is not a parameter to be optimized. It is a social structure to be designed.

Durkheim tells us that the division of AI labor requires integrative institutions. Marx tells us that scaled production requires preserved meaning. Weber tells us that governance must be designed for real behavior, not ideal behavior. Goffman tells us that performance and competence must be realigned. Burke tells us that we need new language before we can have new structures.

These are not abstract theoretical concerns. They are the daily reality of every organization attempting to deploy AI at scale.

Conclusion

The tension between specificity and scalability is the central challenge of the current AI moment. It will not be resolved by better models, better prompts, or better tools. It will be resolved by organizations that understand the social dynamics at play and design their systems accordingly.

The sociologists saw this coming. They described it with different vocabulary, in different eras, about different technologies. But the pattern is the same: when you scale production without scaling meaning, coherence, and governance, the system fractures.

The organizations that get AI right will be the ones that read the sociologists — or at least, the ones that arrive at the same conclusions through their own hard experience.
