Dr. Robert Li

The Cycle of Order and Disorder & the Challenge of LLMs

26 Mar 2025

https://raw.githubusercontent.com/dr-robert-li/jekyll-blog/refs/heads/main/images/goya-order-disorder.jpg

Society swings between ordered and disordered states.

We’ve seen this in dramatic shifts from the spiritual communalism of 1970s counterculture to the hard edges of neoliberal capitalism, from awareness of society’s structural inequalities to movements seeking to restore “wholeness” to some ethnic, religious or cultural ideal. These transitions aren’t random; they are the product of an engine of social upheaval and renewal. For better and for worse.

But today we face something new: large language models are crystallizing these patterns into static forms, potentially disrupting the natural cycle that has driven human progress since the tribal fights between Neanderthals and the earliest Homo sapiens.

As I have pondered this phenomenon and become more intimately familiar with the construct of Large Language Models (LLMs), I’ve been struck by a profound realization. There is a striking resemblance between the functioning of LLMs and the principles of Actor-Network Theory (ANT), particularly in the process of translation. This connection feels especially poignant, as ANT was the area of study in which I first obtained my doctorate. I’ve come full circle, with the innovation potentially driving the next phase of human socio-economic development mirroring the theories I explored in the beginning.

This parallel between ANT and LLMs offers a unique lens through which to examine the current technological revolution in language processing and its potential impacts on society. The way LLMs trace patterns in vast corpora of text and translate this information into coherent outputs bears a remarkable similarity to how ANT describes the formation and transformation of social networks. Unsurprisingly, ANT then provides a powerful framework for understanding not just how LLMs function, but also how they might reshape our society in ways both subtle and profound.

The Dialectic of Societal Thought in Actor-Networks

Society exists in constant flux. Bruno Latour’s Actor-Network Theory shows us that everything in our world exists in shifting networks of relationships.

As stated above, patterns of oscillation between order and disorder appear throughout history. The fall of the Western Roman Empire in 476 CE marked the beginning of what was long called the “Dark Ages”—a period characterized by political fragmentation and frequent invasions. Yet modern historians view this era not as uniformly bleak but as a time of significant social change and adaptation. The establishment of new kingdoms, the rise of Christianity, and innovations in agriculture all testify to the transformative character of this period.

This “darkness” eventually gave way to the Age of Enlightenment (late 17th to early 19th centuries), an intellectual movement valuing knowledge gained through rationalism and empiricism. Enlightenment thinkers like Spinoza, Kant, Hume, and Rousseau explicitly attacked the Middle Ages as a period of social regress dominated by religion. But it was the reaction against this period that drove their championing of natural law, liberty, progress, and the separation of church and state. This dramatic swing from faith-centered medieval thought to reason-centered Enlightenment thinking exemplifies society’s pendulum swing between different organizing principles, and shows how the formation of one network acts as catalyst to the formation of the next.

This wasn’t limited to Western cultures either; similar patterns emerged in China’s history. The chaotic Warring States period (475–221 BCE), with its feudal warlords and constant conflict, eventually consolidated into the ordered Qin dynasty. Centuries later, the highly structured Ming dynasty gave way to the Qing, which initially brought stability but eventually became rigid and unable to adapt to Western influence, leading to the convulsions of the communist revolution and Mao Zedong’s takeover of the nation.

The 1960s counterculture, which began on university campuses in the United States, also showed this process clearly. What began as a minority’s rejection of capitalism transformed into something different. The hippies unwittingly prepared the path for the modern radical Right, kicked off by the Tea Party movement, as they themselves ascended into positions of wealth and power. This transition evolved through complex interactions between various social actors forming new networks with different power dynamics.

Translation, Reification, and the Sandpile Model

Actor-Network Theory helps us understand these transitions. Social structures emerge through what Callon and Latour called “translation”—the transport with deformation of an idea as it moves through networks. Ideas that circulate widely become solid social structures that seem independent of their creators.

As a quick primer, Actor-Network Theory emerged in the 1980s as a paradigm for sociological and technological study, focusing on the relationships between human and non-human actors within networks. ANT’s fundamental premise is to regard both human and non-human entities as equal participants in networks, eliminating hierarchical structures through network analysis. This theoretical framework examines how networks form, stabilize, and transform through processes of tracing and translation.

Tracing in Actor-Network Theory

In ANT, tracing refers to the fantastically laborious and methodical following of interconnections within networks as they emerge, compete with other networks, and become durable over time. It involves examining “the mechanics of power as this occurs through the construction and maintenance of networks made up of both human and non-human actors”. Tracing focuses on how technologies propose projects and gather resources to bring these projects to fruition, carefully documenting the dynamics between heterogeneous actors.

Translation in Actor-Network Theory

Translation is the process that allows a network to be represented by a single entity. It encompasses “all negotiations, intrigues, calculations, and acts of persuasion” through which an actor takes authority to speak or act on behalf of other actors. Callon identified four moments of translation:

  • Problematization: Defining the nature of the problem and establishing dependency
  • Interessement: “Locking” actors into proposed roles
  • Enrolment: Defining and interrelating the allocated roles
  • Mobilization: The moment when the assembled network or “assemblage” acts as a coherent whole behind a representative actor, hence the term actor-network.

LLM Pre-training as Actor-Network Translation

The pre-training process of LLMs remarkably mirrors this concept of translation, transforming a vast heterogeneous network of textual knowledge into a unified representational model (hence, Large Language Model).

Problematization in LLM Development

The first moment of translation begins with problematization.

In the context of LLMs, this involves defining the fundamental challenge of predicting the next word in a sequence based on common language patterns. This task, later performed at inference time, requires the model to capture the statistical relationships between words, phrases, and concepts as they appear in human communication. Developers position the modeling of these language patterns as the “obligatory passage point” through which any solution to natural language understanding must pass.

By framing language understanding as fundamentally a pattern recognition problem, LLM architects establish that any entity wishing to participate in this network must accept this definition of the problem.
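To make this framing concrete, here is a minimal sketch of next-word prediction as pure pattern recognition, using a toy bigram count table rather than a neural network. The corpus is invented for illustration, but the objective is the same one LLMs scale up with billions of parameters.

```python
from collections import Counter, defaultdict

# A toy illustration (not a transformer): next-word prediction framed as
# pattern recognition over observed word sequences. Real LLMs learn the
# same objective with learned parameters rather than count tables.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next word and its estimated probability."""
    counts = following[word]
    total = sum(counts.values())
    best, freq = counts.most_common(1)[0]
    return best, freq / total

print(predict_next("the"))   # ('cat', 0.25) — four candidates tie in this toy corpus
print(predict_next("sat"))   # ('on', 1.0)
```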

Interessement of Data Sources

The second moment involves “locking” actors into proposed roles. In LLM development, this manifests as selecting and committing diverse data sources to contribute to the model’s training.

Data from books, websites, and other sources is enlisted to serve the model’s learning process. Pre-training requires the ingestion of vast datasets; fine-tuning then draws on more narrowly defined datasets, which must be reviewed and labelled accordingly.
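A hypothetical sketch of this interessement: heterogeneous sources are “locked into” roles in the training mixture via sampling weights. The source names and weights below are assumptions for illustration only; real pre-training mixtures are far larger and usually undisclosed.

```python
import random

# Invented data-source mixture: each source is assigned a role (a weight)
# that determines how often training batches are drawn from it.
mixture = {
    "books":     0.30,   # long-form, edited prose
    "web_pages": 0.50,   # broad but noisy coverage
    "code":      0.15,   # structured, formal text
    "dialogue":  0.05,   # conversational patterns
}

def sample_source(mixture):
    """Pick the source the next training batch is drawn from."""
    sources, weights = zip(*mixture.items())
    return random.choices(sources, weights=weights, k=1)[0]

print([sample_source(mixture) for _ in range(10)])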

Enrolment of Training Components

Enrolment defines how actors interrelate in the network. In LLM pretraining, this involves orchestrating how different data types, algorithms, and computational resources work together.

It requires decisions about data structures (sparse vs. dense representations), data schemas (how to encode hierarchical information), and the selection of specific algorithmic approaches (transformer architectures vs. alternatives).
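To ground the first of those decisions, here is a small sketch contrasting a sparse one-hot representation of a token with a dense embedding of the same token. The vocabulary size, token id, and random embedding table are invented stand-ins for learned values.

```python
import numpy as np

# Sparse vs. dense representations of the same token, for illustration.
vocab_size = 10_000
token_id = 4242          # hypothetical id assigned by a tokenizer

# Sparse: a one-hot vector, almost entirely zeros.
one_hot = np.zeros(vocab_size)
one_hot[token_id] = 1.0

# Dense: a low-dimensional embedding (random stand-in for a learned table).
embedding_table = np.random.default_rng(1).normal(size=(vocab_size, 64))
dense = embedding_table[token_id]

print(one_hot.nonzero()[0], dense.shape)   # [4242] (64,)
```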

Computational resources must be requisitioned and allocated—determining whether to use TPUs, GPUs, or specialized hardware, and how to distribute workloads across them.

The learning objective must also be defined, whether masked language prediction or next-token prediction, and the interactions between tokenized words, with their distances and dimensions, are carefully specified through self-attention mechanisms.
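As a rough illustration of that last point, here is a minimal numpy sketch of scaled dot-product self-attention, the mechanism through which relationships between tokens are defined. The dimensions and random weight matrices are stand-ins for learned parameters.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token vectors.

    X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: projection matrices.
    Each output row is a weighted average of all value vectors, with
    weights given by how strongly tokens attend to one another.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # pairwise affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
d = 8                                 # toy model dimension
X = rng.normal(size=(5, d))           # five tokens
out = self_attention(X, *(rng.normal(size=(d, d)) for _ in range(3)))
print(out.shape)                      # (5, 8)
```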

Chain of Thought (CoT) reasoning shows these moments of translation, up to enrolment, in action. The model generates intermediate reasoning steps before producing a final answer, effectively enrolling different components of its own knowledge representation to collaborate in problem-solving.

By breaking complex reasoning into sequential steps, CoT defines specific roles for different parts of the model’s knowledge: some parts identify relevant facts, others apply logical operations, and still others synthesize conclusions.

This internal division of labor mirrors the concepts of problematization, interessement and enrolment. The model learns not just to produce answers, but to organize its internal representations into a collaborative reasoning process—a microcosm of the broader enrolment happening across the entire LLM development ecosystem.
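As a hedged illustration of this contrast, the two prompt templates below show a direct query versus one eliciting the fact-identification, logic, and synthesis roles described above. The exact phrasing that triggers chain-of-thought behaviour varies by model and is an assumption here.

```python
# Illustrative prompt templates only; CoT prompting conventions differ
# across models, and these strings are invented for this sketch.
direct_prompt = (
    "Q: A farmer has 17 sheep and all but 9 run away. How many are left?\n"
    "A:"
)

cot_prompt = (
    "Q: A farmer has 17 sheep and all but 9 run away. How many are left?\n"
    "A: Let's think step by step.\n"
    "1. Identify the relevant fact: 'all but 9 run away'.\n"
    "2. Apply the logic: the sheep that remain are the 9 that did not run.\n"
    "3. Synthesize the conclusion: 9 sheep are left.\n"
    "Therefore, the answer is 9."
)

print(cot_prompt)
```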

Mobilization of the Model

The final moment of translation, mobilization, ensures that representatives speak for the entire network. The interface through which one interacts with the LLM, together with its inferenced output, becomes the spokesperson for all the linguistic patterns the model has absorbed.

While actors may jostle over roles and primacy, or even be substituted, these connections solidify over time and with frequency until no one remembers when the actors were independent, and the assemblage is widely considered an indivisible whole, a “black box”.

Reified Representations of All Linguistic Knowledge

This is reification, and it shapes how societies evolve (as the saying goes, we stand on the shoulders of giants who themselves stand on the shoulders of other giants). The thoughts that form actor-networks become increasingly embedded until their origins are forgotten.

Reification can take hours in the case of smaller assemblages relying on already densely reified actants, like your smartphone or laptop; days or weeks in larger assemblages with more tenuous actants, such as volunteer and community groups; and potentially years or even centuries for even larger, more diffuse assemblages, such as nation states or social norms.

For example, democracy was born in ancient Athens around the 6th century BCE but did not become a dominant form of governance until, arguably, after the fall of the Berlin Wall in late 1989. It is now implemented in various countries without the original struggles that shaped it, creating a simulacrum, a copy without an original. However, those original struggles have meaning and agency themselves. The slow decline of democracy since the 1990s, culminating in and represented by the recently elected 47th President of the United States and his rapid reshaping of democratic institutions, could be a result of reification obscuring this actant and all of its connected events.

Political scientist Brian Klaas offers a modern metaphor for understanding the fragility of overly reified social structures: the sandpile model. Borrowed from physics and complexity science, this model illustrates how systems can reach a state of “self-organized criticality.” As Klaas explains, if you add grains of sand to a pile one by one, the pile eventually reaches a critical state where a single additional grain can trigger an avalanche. The system teeters on a precipice: changing one small thing can change everything.
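Klaas’s metaphor can be simulated directly. Below is a minimal sketch of the Bak–Tang–Wiesenfeld sandpile, the physics model the metaphor borrows from: grains are dropped one at a time, any site holding four or more grains topples onto its neighbours, and avalanche sizes are recorded. The grid size and grain count are arbitrary choices for illustration.

```python
import numpy as np

# Bak-Tang-Wiesenfeld sandpile: long stretches of stability punctuated by
# occasional large collapses are the signature of self-organized criticality.
rng = np.random.default_rng(42)
N = 20
grid = np.zeros((N, N), dtype=int)

def drop_grain(grid):
    """Add one grain at a random site, topple until stable, return avalanche size."""
    i, j = rng.integers(0, N, size=2)
    grid[i, j] += 1
    toppled = 0
    while (unstable := np.argwhere(grid >= 4)).size:
        for i, j in unstable:
            grid[i, j] -= 4
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < N and 0 <= nj < N:   # grains off the edge are lost
                    grid[ni, nj] += 1
            toppled += 1
    return toppled

sizes = [drop_grain(grid) for _ in range(5000)]
print("largest avalanche:", max(sizes), "median:", sorted(sizes)[len(sizes) // 2])
```

Most drops do nothing; a few trigger avalanches that dwarf the median, which is exactly the brittleness the analogy is meant to capture.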

This analogy captures the brittleness of overly reified social constructs. When ideas, practices, or institutions become too rigid through excessive copying and reification, they reach a state of criticality. A seemingly minor perturbation—a single grain of sand—can cause the entire structure to collapse. We’ve engineered modern society to exist perpetually on this edge of chaos, prioritizing optimization and efficiency over resilience.

The Challenge of Large Language Models

What’s unique today is the introduction of language models as non-human actors with unprecedented agency in our communication networks.

Language is the foundation of communication, enabling the exchange of ideas and knowledge, which in turn shapes culture, forms the basis of society, and ultimately supports the development of civilization.

LLMs crystallize language into simplified structures by leveraging similarity distance—often based on mathematical techniques like cosine similarity in vector spaces—to map words, phrases, and concepts in a way that mimics human-like understanding. Instead of needing explicit rules for every possible communication pattern, these models create averages of meaning from vast amounts of text data.
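As a concrete sketch of that similarity measure, the snippet below computes cosine similarity between toy “embeddings”. The vectors and their values are hand-picked for illustration; real models learn embeddings with hundreds or thousands of dimensions.

```python
import numpy as np

def cosine_similarity(a, b):
    """Angle-based closeness of two vectors, ignoring their magnitudes."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical 4-dimensional embeddings, invented for this sketch.
king  = np.array([0.80, 0.65, 0.10, 0.20])
queen = np.array([0.75, 0.70, 0.15, 0.80])
apple = np.array([0.10, 0.05, 0.90, 0.40])

print(cosine_similarity(king, queen))  # ~0.89: related concepts sit close
print(cosine_similarity(king, apple))  # ~0.27: unrelated concepts sit far apart
```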

By identifying patterns and relationships between words, LLMs appear to generate coherent responses, translate languages, and even infer context, effectively mirroring the way human communication underpins culture and society—though without true comprehension or lived experience.

They function as processors of human thought, language, and culture. In ANT terms, they are powerful actants in our networks.

Unlike humans, whose thoughts vary based on experiences, LLMs operate as static averages of all the opinions they’ve been trained on. This consistency challenges the natural cycle of societal thought. In Aristotle’s framework, LLMs embody techne and episteme (craft and scientific knowledge) without sophia and phronesis (wisdom and practical judgement).
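Under the loose assumption that viewpoints can be represented as vectors, here is a toy demonstration of that “static average”: collapsing many divergent vectors into their mean erases nearly all of the spread that distinguished them.

```python
import numpy as np

# 1000 invented "viewpoints" as random 16-dimensional vectors.
rng = np.random.default_rng(7)
opinions = rng.normal(loc=0.0, scale=1.0, size=(1000, 16))

consensus = opinions.mean(axis=0)     # the single averaged "view"

print("spread across individual views:", opinions.std())    # ~1.0
print("spread within the averaged view:", consensus.std())  # ~0.03
```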

Reification at Light Speed and Brittle Simulacra as a Poor Man’s Imitation

Typically, actant density scales in line with the size or diffusion of the network, as discussed above. LLMs are the first actor-network where the size and diffusion of the network is abstracted from its assemblage density.

It is a compression of, arguably, the current universe of human communication in all its forms and modalities. Think about your own universe of communication - the devices, platforms, methods, spaces and personas - and imagine compressing it all to fit onto a thumb drive. Imagine attempting to navigate and make sense of this repository of all your knowledge. Now multiply that by the billions of people who have also uploaded their lives onto the Internet, into software and platforms, and shared them on public networks and through messaging protocols.

LLMs short-circuit this process. Traditionally, societal evolution depended on small variations in human thought building momentum over time. When communication filters through these models, we lose the natural variation that drives social evolution. Instead, we get averaged, sanitized ideas. This creates a one-way translation, in which the deformation of ideas becomes standardized rather than creative.

This accelerated reification creates what Jean Genet might recognize as “simulacra of simulacra”: copies of copies that become increasingly detached from any original meaning. As noted in analyses of Genet’s work, “repeated simulacra operations disrupt the identity bond maintained by simulacra itself, leading to the constant fragmentation and diffusion of the subject’s identity structure and relationships”. Applied to our broader social context, this suggests that as ideas become increasingly reified through LLM-mediated communication, they become brittle and prone to sudden collapse, with an effect as vast as the actant’s network is dense. For example, the actant we know as JIT (Just-in-Time) supply chain management arguably explains how a single ship stuck in the Suez Canal could cause $50 billion in global economic damage.

The consequences extend beyond communication. If the actants circulating within networks are increasingly LLM-generated simulacra rather than authentic human expressions, the resulting social structures may become equally brittle and broad in impact and effect.

Necessary Disorder

To maintain societal resilience, we must preserve space for disordered thought within our actor-networks—to “allow some crazy” into the increasingly algorithmically governed world.

This doesn’t mean embracing harmful ideologies but ensuring that human communication retains its variability and capacity for novelty. Human thought processes, with their inconsistencies and creative leaps, have been the combinatory seeds of progress throughout history.

When LLMs increasingly stand in for human communication, this variability is smoothed out across layer after layer of neural networks, and we risk creating actor-networks with less diversity of agency and growing conformity to fewer, but increasingly dense, polarities.

We may find ourselves in a society that appears coherent but lacks mechanisms to grow, violently swinging between these polarities until we cease to move forward altogether, having lost the means to stretch, bend and challenge, to form and reform countering actor-networks.

Conclusion

The cycling between ordered and disordered thought has been fundamental to human societies. Through Actor-Network Theory, we understand this as the constant formation and reformation of networks comprising both human and non-human actors.

Large language models represent an unprecedented intervention, crystallizing reified forms of thought into static objects that influence human communication. They embody translation but in a way that standardizes rather than diversifies outcomes.

As we navigate this terrain, we must consider the consequences of allowing LLMs to mediate too much of our communication. The risk isn’t just a less interesting society but one that loses its capacity for growth.

What we need isn’t perfect order or complete disorder, but a conscious cultivation of the tension between them. We need networks that allow for diverse forms of agency and transformation.

It is from this chaotic soup that newness emerges.

Citations:

[1] Latour’s Actor Network Theory - Simply Psychology. https://www.simplypsychology.org/actor-network-theory.html
[2] Thoughts on Actor-Network Theory (ANT) and Entanglement: Latour … https://antonisch.wordpress.com/2019/04/28/260/
[3] Ancient Greek Philosophy - Internet Encyclopedia of Philosophy. https://iep.utm.edu/ancient-greek-philosophy/
[4] Actor-network theory (PDF). https://sidoli.w.waseda.jp/Sismondo_Introduction_STS_8.pdf
[5] Chaos (cosmogony) - Wikipedia. https://en.wikipedia.org/wiki/Chaos_(cosmogony)
[6] Theorizing social change - Zheng - 2022 - Compass Hub - Wiley. https://compass.onlinelibrary.wiley.com/doi/10.1111/phc3.12815
[7] Actor Network Theory - Sage Publishing (PDF). https://us.sagepub.com/sites/default/files/upm-binaries/5222_Ritzer__Entries_beginning_with_A__[1].pdf
[8] Actor–network theory - Wikipedia. https://en.wikipedia.org/wiki/Actor–network_theory
[9] On recalling ANT - bruno-latour.fr (PDF). http://www.bruno-latour.fr/sites/default/files/P-77-RECALLING-ANT-GBpdf.pdf
[10] Aristotle - Wikipedia. https://en.wikipedia.org/wiki/Aristotle
[11] Actor-network theory - the market test - openscienceASAP (PDF). http://www.openscienceasap.org/wp-content/uploads/2013/10/Callon_1999.pdf
[12] Ancient philosophers on mental illness - Marke Ahonen, 2019. https://journals.sagepub.com/doi/full/10.1177/0957154X18803508
[13] Jean-Jacques Rousseau - Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/rousseau/
[14] The Open Society and Its Enemies - Wikipedia. https://en.wikipedia.org/wiki/The_Open_Society_and_Its_Enemies
[15] A philosopher’s guide to messy transformations - Strategy+business. https://www.strategy-business.com/article/A-philosophers-guide-to-messy-transformations
[16] How has philosophy impacted society? : r/askphilosophy - Reddit. https://www.reddit.com/r/askphilosophy/comments/zpsb0g/how_has_philosophy_impacted_society/