In 1989, we started using the word ‘network’ more than ‘machine’

1989 was a momentous year – protests in Tiananmen Square, the falling of the Berlin Wall, ‘the end of history’, no less. All significant events, but I offer a late entrant as candidate for most important event of 1989: it was the year we started using the word network more than the word machine.

1989 wasn’t quite the ‘end of history’, but it may, very beneficially, have been the end of the Machine Age of Human Cognition (c. 1637-1989). Slowly, we are comprehending that the anchor metaphor of Western culture has changed – because we changed it.

This change in world metaphor heralds a rebalancing in our scientific investigation of the world – from a reductionist-dominated enquiry to something that blends reductionism and systemism.  Critically, this potential change in shared cognition has the power to unlock some of our most persistent social and ecological problems, if our culture can grasp its significance quickly enough…


Introduction

Certain years pack more history into their equal allotment of days than others. 1776, 1789, 1848, 1968, to name a few, all stand out from their less remarkable neighbours. 1989 was such a year. Eastern Europe was a particularly rich source of events: the Berlin Wall fell, Solidarity won elections in Poland, Václav Havel became President of Czechoslovakia, and Ceaușescu came to a grisly end in Romania. Under Mikhail Gorbachev’s leadership, the Soviet Union held its first competitive elections for its Congress of People’s Deputies.

Revolutionary winds also blew strongly in China, though ultimately blew out. In the most memorable image of the year, an unidentified protester in Tiananmen Square stood alone but resolute before a column of tanks, the two shopping bags he incongruously carried epitomising the defiance of ordinary people.

Time magazine would declare it the “year that changed the world”; Francis Fukuyama would declare it “the end of history.”[1]

Given all this, it may be reckless to squeeze yet another significant event into a year already so generously endowed, and downright brash to suggest it might yet prove the most important development of 1989, but I will at least make a case. Although unknowable at the time, underneath the maelstrom of surface events, a profound tectonic shift in human cognition was taking place. For, as we have subsequently discovered, 1989 was the first year in recorded history that humans used the word network more often than the word machine.

A Crossing in the Lexicon

This, at least, is the verdict of the lexical treasure trove that is Google Books’ Ngram Viewer, an online tool that allows users to search for the frequency of words or phrases in a digitized collection of over 5 million books.[2] If one searches for the terms machine and network in the English-language corpus from 1800 to 2000, Ngram Viewer produces the following graph. And it turns out that the lines cross in 1989.

Figure 1: In 1989, we started using the word ‘network’ more than ‘machine’
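For anyone wishing to reproduce the crossover, a minimal sketch follows. It relies on the unofficial JSON endpoint behind the Ngram Viewer web page – Google has never formally documented this interface, so the URL, corpus label and response format are assumptions that may change without notice – and simply reports the first year in which network overtakes machine.

```python
# Sketch: find the year 'network' overtakes 'machine' in Google Books Ngram data.
# Assumes the undocumented JSON endpoint used by the Ngram Viewer web page is still
# available and returns one timeseries of yearly frequencies per query term.
import requests

URL = "https://books.google.com/ngrams/json"
params = {
    "content": "machine,network",   # terms to compare
    "year_start": 1800,
    "year_end": 2000,
    "corpus": "en-2019",            # English corpus label (assumption)
    "smoothing": 0,                 # raw yearly frequencies, no smoothing
}

resp = requests.get(URL, params=params, timeout=30)
resp.raise_for_status()
series = {item["ngram"]: item["timeseries"] for item in resp.json()}

machine, network = series["machine"], series["network"]
years = range(1800, 1800 + len(machine))

# First year in which 'network' is used more often than 'machine'.
crossover = next((y for y, m, n in zip(years, machine, network) if n > m), None)
print("Crossover year:", crossover)
```

Run with no smoothing this is the same comparison that underlies Figure 1; different smoothing settings or corpus versions may shift the exact crossover year slightly.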

To be clear, linguistic scholars advise caution in using Google Ngram Viewer as an authority on lexical trends.[3] There are several limitations, ranging from the inaccuracy of the Optical Character Recognition (OCR) processes used to scan millions of texts, to the increasing presence of scientific journals in later years, to the fact that each text counts equally whether it is Harry Potter and the Philosopher’s Stone or an obscure article on literary myth.  

However, the same pattern of usage appears in the archives of the New York Times and the Times of London. In both cases, the 1990s were the first decade in which more articles in their respective databases featured network than machine. At a more granular resolution, the NYT marks the moment of transition as 1990, the London Times as 1985.[*]

OK, then, perhaps I shouldn’t force this transition too specifically upon 1989 – and all these sources of course suffer from being a record only of the published word, not of the many other channels by which language travels – but the three different sources clearly corroborate a decisive transition at about this point in history: we started using the word network more than machine and have never looked back.

In passing, it is worth noting two other striking features of the Ngram graph. There is the coincidence that network traces an exponential growth curve through the 20th Century, echoing the growth profile observed in many real-world networks. More sombrely, it is hard to ignore that machine’s best years were periods of global conflict, prompting documentation of the ‘war machine’ and its many constituent machine parts.

Another hazard of lexical searches is that many words are multivocal or carry multiple meanings. For example, attempts to track mental health trends via the frequency of depression fall foul of this term’s usage in other contexts – from geographical to meteorological to economic – all of which introduce their own rhythms.  

This is less of a concern here because the significance I wish to attribute to the shift between machine and network is the degree to which we are familiar with and use the core ideas those words represent, even if their specific usage varies. Hence, while various modifiers can specify different types of machine and network (from war machine to love machine, from canal network to local area network), as core concepts, both machine and network are helpfully univocal, and clearly contrasting. The OED defines them as follows:

Machine: “An apparatus using mechanical power and having several parts, each with a definite function and together performing a particular task”;

Network: “A group or system of interconnected people or things.”

Oxford English Dictionary

So, both speak of connection, but machines have ‘definite functions’ and ‘particular tasks’, whereas networks emphasize the connectivity of things while leaving open what that connectivity might lead to or how it may evolve.

The Rise of Networks

So what does Figure 1 signify? Essentially that, in relatively short order, we started writing about a notion we call network more frequently than about a notion we label machine. There was a new concept being bandied about, capturing our attention and permeating our consciousness. Language is how we grasp the world and, while language constantly evolves, this feels like a significant adjustment of our collective handhold on reality.

Of course, usage has tracked the steady infiltration of networks into everyday life. Ngram Viewer reveals network riding into the lexicon upon a succession of technological advances, from road and rail networks to computer and TV networks to the more virtual networks of modern life.

Table 1: Date of First Mention in the New York Times

Road network – 1908
Radio network – 1926
Railroad network – 1930
Highway network – 1932
Broadcast network – 1938
Television network – 1940
Computer network – 1959
Social network – 1970
Cable TV network – 1970
Local area network – 1982
Virtual network – 1992

Illustrating the point that, as a concept becomes familiar, its usage spreads beyond its original technological associations, network has turned back to describe much longer-standing concepts. From the 1960s, social network grows strongly as a term, taking a bit of market share from old-fashioned community.

As networks infiltrate our world, so we increasingly become network experts – few of us through any formal effort to develop expertise but, more generally and more powerfully, because we all develop a feel for networks from direct experience. For example, systems analysts describe an important feature of networks as their capacity for ‘reinforcing feedback loops’, in which certain ideas and behaviours are amplified in runaway fashion. That may sound abstract, but if you ask a person on the street what it means for a tweet to go viral, their answer will reveal complete comprehension of just this concept.
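To make the loop concrete, here is a deliberately toy sketch – the numbers are invented for illustration and model no real platform – of how a reinforcing loop behaves: each round of shares seeds the next, so a branching factor above one produces the runaway growth we casually call ‘going viral’, while a factor below one fizzles out.

```python
# Toy model of a reinforcing (positive feedback) loop: each share of a post
# prompts further shares in the next step. The branching factor is invented
# purely for illustration; nothing here models a real platform.
def simulate_shares(initial_shares=5, branching_factor=1.3, steps=20):
    history = [initial_shares]
    for _ in range(steps):
        # New shares this step are proportional to shares in the previous step:
        # the output of the loop feeds straight back in as its next input.
        history.append(history[-1] * branching_factor)
    return history

for step, shares in enumerate(simulate_shares()):
    print(f"step {step:2d}: ~{shares:,.0f} shares")
```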

Networks have stealthily taught us other lessons, including the idea that important attributes of networks are not easily quantifiable. As legions of social media users have learnt experientially, the quantity of likes, links and friends is a dim proxy – sometimes even a contrary indicator – for how socially connected and emotionally supported one feels. We develop a sense that, as measurement becomes ever easier, it is increasingly easy to be misdirected by what is measurable.

We understand, also, the two-way embrace of networks: how we bid others join for our own selfish reasons and yet find ourselves inextricably ensnared for others’ reasons. Try and disengage? ‘But what if I need to get hold of you?’ Networks are like Velcro. Or spider webs.

In their pervasiveness, these networks start to influence our identity – ‘am I the same person on LinkedIn as on Instagram?’ – and autonomy, with the ever beckoning phone and its notifications of what we should think and do next.  

Making the World with Metaphors

Though barely noticed, this infiltration of network technologies into our everyday lives stealthily inclines us to perceive the world in new ways. Changes in the real world make possible the development of a new base metaphor, or ‘world metaphor’, with which we perceive the world. George Lakoff, the cognitive linguist, has argued that we rely on metaphors to interpret our reality and that ‘new metaphors have the power to create a new reality.’

This sounds at odds with the notion of a singular, objective reality which science has keenly advanced for most of the period since the Scientific Revolution. However, we are increasingly learning that what we see in the world – both individually and collectively – depends, at least in part, on how we choose to see.

While less important for the physical sciences, this reflexivity is critical in the life sciences. For example, Frans de Waal, in his book Are We Smart Enough to Know How Smart Animals Are?, argues that animal behavioural science went awry in the 20th Century because trying to understand animals in ways that made sense to us often proved a poor window onto them.[4] Rival camps of animal behavioural scientists observed contrasting behaviours from the same species because they were guided by different founding assumptions. They had built their respective scientific disciplines on different base metaphors of how animals might work.

The Machine Metaphor

On a larger scale, it is increasingly evident that for the last 350 years we have built virtually all disciplines upon the common base metaphor that the Universe is a machine. Fritjof Capra, the long-time systems thinker, argues that we have been in the grip of the machine metaphor since the 17th Century, when Western culture adopted a worldview that the “universe was a machine and nothing but a machine.”[5]

Rene Descartes (1596-1650), of course, is pinned as the architect of this development, though really he was more the genius herald of a cultural development that would almost certainly have occurred with or without him. Nonetheless, he crisply articulated a view of the world that split the thinking human mind from everything else in the Universe, which could be safely considered a machine, subject to mechanical laws governing the arrangement and movement of its parts. The mind was safely detached and, so the thinking went, could proceed from that privileged and insulated redoubt to identify the mathematical laws which made all else work.

If Descartes had wanted backing for his perspective, Newton’s 1687 discovery of universal laws of motion and gravity was an absurdly generous endorsement – a small set of mathematical laws governed the movement of all objects, large and small, across the whole clockwork Universe.

Since then, the machine metaphor has worn many different faces. Through history, humans have instinctively reached for the high technology of the day as metaphors for their minds and for society and the Universe. Our description of the brain has progressed through various analogies: from filing cabinet to telephone switchboard and inevitably to computer. Through the 20th Century, economists viewed the economy as a hydraulic machine, characterized by liquid monetary flows. A professor at the LSE even built a 2-metre high hydraulic machine to simulate the economy and guide policy.

But, critically, these are all machine metaphors. Compare them, for example, with what preceded the machine metaphor – Plato’s idea that memory was an aviary, or the Buddha’s description of the unsettled mind as a monkey mind.

Naturally, humans have not always seen the world as a machine; indeed, they have not had the capacity to do so until recently. Consequently, there have been prior metaphors that anchored human understanding of the world. Jeremy Lent, in his cognitive history of humankind, The Patterning Instinct, identifies, across a long sweep of time, a series of such metaphors that preceded ‘Universe as Machine’, from hunter-gatherers’ perception that ‘Everything is Connected’ to an early Chinese view of a ‘Harmonic Web of Life’.[6] In his terms, human history has been marked by a succession of cognitive frames that became the shared basis for perceiving reality and so shaped values and behaviours. Our machine metaphor, then, is just the latest chapter of a much longer cultural story.

Effectively, what Lent and Capra describe of culture as a whole resembles what Thomas Kuhn famously identified for individual disciplines. Just as individual disciplines eventually run into the constraints of their guiding axioms, setting the stage for a paradigm shift in which the whole basis of the discipline changes, so, over longer time periods, human culture appears to have repeatedly bumped up against the limits of its prevailing anchor metaphor, inducing revolutionary changes in base cognition by which individuals and whole cultures organize around a new way of perceiving the world.

In a sense, a linguistic shift from machine to network heralds a meta-paradigm shift, or a cultural paradigm shift, in which we are transitioning the foundational metaphor upon which our whole scientific endeavour and social coordination rests. This feels important because, though Lent documents ancient precedents, Western culture has not experienced a change in its base metaphor for over 350 years.

The Machine Age

The machine metaphor has manifested in the reductionism that has constituted the dominant attitude within Science since Descartes’ time. Popular understanding of Science suffers from its monolithic projection – we talk of the Scientific Method – but there are at least two principal forms of science, and we have overwhelmingly engaged in just one, not both. We need broader recognition of at least the first fault line science encounters in its work, namely whether the question being asked of a phenomenon is a reductionist question of ‘what makes up this thing?’ or a systemic question of ‘what is this thing part of?’

Our strong leaning towards reductionism since the time of Descartes has been both helpful and unhelpful. Helpful, because reductionism has beneficially unlocked many of the Universe’s secrets, from the molecules, atoms, electrons and yet smaller particles of the physical world to the organs, tissues, cells and proteins of the biological world.  

But unhelpful also, because the prodigious discoveries that flowed from perceiving the world as a machine have reinforced this particular way of seeing and have postponed our discovery of other properties of the Universe only accessible to non-machine thinking. William Blake (1757-1827) was among the first to recognize the double-edged sword of the machine metaphor and the reductionism it engendered, writing in 1802:

“…May God us keep

From Single vision and Newton’s sleep.”

William Blake

The price we have long paid for the spectacular advances made possible by the ‘single vision’ of reductionism has been the postponement of understanding certain phenomena that may only be gleaned from a different perspective.

Blake was not alone, but belongs to a tradition of Western thinkers who have maintained a flame of systemic thought on the periphery of Western culture’s prevailing reductionism. Generally less well known than their reductionist peers – Wiener, von Bertalanffy and Bateson hardly challenging Einstein, Crick and Friedman in the name-recognition stakes – they constitute a parallel universe of thinkers whose efforts to propagate their own vision have been repeatedly eclipsed by reductionist breakthroughs. In a sense, the problem for Blake and his fellow systemists is that ‘Newton’s Sleep’ has been so productive. Lent refers to such systems thinkers as upholding a “moonlight tradition”, casting a different light on familiar surroundings even as they were inevitably and repeatedly overshadowed by the glare of the dominant reductionist paradigm.

In addition, until very recently, these thinkers have wanted for adequate mathematical tools to address the more complex problems systemic thinking confronts. Scientific advance has often required the invention of new mathematical techniques – Newton had to develop calculus to formulate his laws of motion – and for a long time, our mathematics has not been sufficiently developed to address complex problems.

In short, it has long been uphill sledding for scientists inclined to a systemic view, and frankly, why bother, when reductionism was the gift that kept on giving.

The Network Age

This, then, is the historical context that a perceptual transition from machine to network might finally be changing. Strangely, two things seem to be happening simultaneously – our real-world usage of networks is increasing, making its use as metaphor more accessible, at the same time that its usefulness as metaphor seems also to be rising. This is, at least, fortuitous timing. It may be more than that.

One of the curiosities is that networks have pervaded our world because we have been networking machines! From cars to computers to phones, we invent machines which require networks to get best use from them. That real-life machines prompt real-life networks suggests that the machine metaphor has in some way prepared the ground for a network metaphor. Possibly, it was some unavoidable path to collective maturity – we had to make machines first and then network them together to equip ourselves with the intuition of the way societies and Nature work.   

Writers from Jacques Ellul in the 1960s to Kevin Kelly and Brian Arthur more recently have puzzled over technology’s seemingly innate impetus.[7] It invites personification – ‘what does technology want?’ asked Kelly – which in turn raises difficult questions about how its ‘agency’ interacts with ours. It is almost as if the sheer possibility of something being able to be built bids us to build it, which then, of course, makes possible the building of the next thing, which we promptly build in turn. It sometimes seems that we are mere contributors to an incessant combinatorial process, continuing at a more complex level what Nature initiated billions of years ago when it experimentally smashed atoms together to see which combinations would make viable elements and, equally significant, which would not. Our Periodic Table records the winning combinations then, Amazon’s virtual shelves the winning combinations today.    

However we ultimately resolve the causation at work, history records that our technological frontier advanced in such a way that we made machines and also came to think of the world as a machine, ushering in an era of reductionism. Today’s technological frontier sees us making more networks – a possible Internet of Things, no less – providing a new and widely accessible metaphor with which to perceive the world anew.  

Reductionism hitting bedrock

What makes this fortuitous timing is that, after a tremendous 350-year run, the machine metaphor is exhibiting diminishing returns in many fields of science. Reductionism has been hugely generous with its gifts – generous, but not limitless. In field after field, the spade of reductionism is hitting bedrock and being turned.

Take the case of biology. The methods of reductionism led us from the organism to the cell to the molecule to the protein and ultimately to the discovery of DNA and its 4-letter alphabet, often hailed as the greatest scientific breakthrough of the 20th century. The temptation of reductionism is that, having broken phenomena into smaller and smaller parts, causal explanation for the original whole can be achieved merely by reversing direction, and adding back up knowledge of the parts, in machine-like fashion, with nothing lost along the way. This was the basic premise of the ‘central dogma of molecular biology’, which hypothesized a mechanical process in which DNA made RNA made the proteins that made the tissues that ultimately constituted the human body. The implication was that there was a specific gene for every human function or trait.

Yet, the clear message of the Human Genome project and of recently identified processes of epigenetics and DNA methylation is that no such neat correspondence between genes and traits exists. Instead, a more complex – frankly, more interesting – picture has emerged in which the larger parts of the organism themselves play some role in determining which proteins are expressed.

As Wendy Wheeler, the English biosemiotician, puts it simply: ‘DNA is not a blueprint, but a library’, not a deterministic set of instructions for cellular machinery to follow blindly, but a repository of possibilities from which an organism can select depending on contextual demands.[8] This raises fascinating questions regarding what is doing the selecting and how, but the two-way nature of the process seems like it will yield its secrets more readily to a network perspective than a machine perspective.

Of course, the flagship example of reductionism reaching the end of a road is physics’ earlier discovery of the quantum layer. Having drilled down through the structured, ordered stack of the compound, the molecule and the atom, physics breached the quantum barrier, identifying behaviours of particles which can only be described systemically. It is as if Nature mischievously left a message at the bottom of our physical world: ‘Congratulations on making it this far! Now turn around and think the other way.’

A Time for System Science

If this is a new, meaningful, dawn for systemic science, there will be implications both for our high-level perspective on the world and the way we pursue specific enquiries.

A shortcut to glean some of the high-level consequences would be to identify a suitable presiding philosopher for systemism, to match the role Descartes has fulfilled for reductionism. Figureheads serve as powerful labels with which the human brain can grasp complex ideas. A strong candidate would be Charles Peirce (1839-1914, pronounced ‘purse’), who is arguably long overdue greater recognition.

Seeing the world in twos or threes:
Rene Descartes (1596-1650) and Charles Peirce (1839-1914)

Descartes’ dualism provided the philosophical underpinnings of reductionism. His splitting of the world into the two of human mind and all else served as the dualistic subject-object template for reductionist enquiry. In contrast, Charles Peirce, and his fellow semioticians – and now biosemioticians – argue we must always see the world in three: namely, subject, object and the sign relationships that exist between them. Not only does the notion of a sign reconnect subject to object after reductionism’s rude severance, but it opens the possibility that a subject’s object might be its own subject looking back. In other words, there is a two-way flow of interaction between living things, resembling the simultaneous uploading and downloading of networked activity. I am a sign for you at the same time that you are a sign for me.

If this seems opaque it is because Western culture has been so firmly in the grip of Descartes’ duality that the intersubjectivity of living things seems alien to modern Western minds. As Werner Heisenberg observed:

‘The Cartesian partition has penetrated deeply into the human mind during the three centuries following Descartes and it will take a long time for it to be replaced by a really different attitude toward the problem of reality.’

Werner Heisenberg

It is impossible to mend such a deep rupture with a few sentences, but one can glimpse the terrain and flag some entry points. For example, it is accessible from the familiar sensation that one’s identity is subtly different depending on the company one shares. Certain parts of our identity arise only in the presence of certain others. Your identity is not yours alone, but an interpersonal construct that owes something to who is regarding you. From this intersubjectivity, or inter-affectivity, psychologists and neuroscientists argue that certain parts of our experience or reality lie in the interaction between conscious minds. Certain feelings and experiences in our lives are held not just by us and not just by another, but can only be accessed by the two of us in conjunction. Hence, bereaved spouses effectively suffer two losses – they miss their partner as his or her own person, but they also miss that part of their own identity that could only, and uniquely, be summoned by their partner.

The acknowledgment of such ‘in-betweenness’ of living things has lain dormant in Western culture, and if it sounds otherworldly, bear in mind that it is of increasing interest to psychologists who feel that this ephemeral ‘third space’ may be the critical nexus of child development and mental wellbeing. While physical healthcare has gained great mileage from ‘treating the patient’, mental healthcare may require ‘treating the patient and the context’.

Special and General Forms of Causation

Here’s the key thing about the competing visions of Descartes and Peirce: Descartes’ bifocal view of the world is perfectly adequate to decipher the inanimate parts of the Universe that are the focus of physics and chemistry. While there are lots of interesting attributes in these fields, sign relations are not among them. Dead things are network-poor or ‘sign-lite’ parts of our world.

In contrast, to understand living things, which are network- and sign-rich, requires donning Peirce’s trifocals for full comprehension. Study of inanimate matter is the special case where, in the absence of sign relations, Peirce’s third lens adds nothing and can be safely dispensed with, like a term in an equation that collapses to zero and can be ignored. General understanding of the Universe – of both living and dead things – requires a triadic perspective. Peirce provides the general framework for how to see, of which Descartes’ dualism is a special form, sufficient for understanding non-living things.

It is one of the central errors of 19th and 20th Century science that we have transferred a dualistic perspective, sufficient for studying the dead parts of the Universe, to the study of living things, where only a triadic view will do.

While my playful metaphor suggests we might easily swap Descartes’ bifocals for Peirce’s trifocals, Heisenberg accurately identifies that any change of perspective must confront the deep neural rooting of our cognitive frames – grooved into our plastic brains by our habitual thoughts, reinforced by cultural context.

Indeed, sitting underneath the practical and philosophical imbalances of the machine metaphor may be the asymmetry of the human brain itself, characterized by an ever more assertive left brain, whose particular attributes bias towards reductionism. As Iain McGilchrist has argued, there are two ways of being in the world, and the pattern of Western cultural development suggests we have, in runaway fashion, been using our left brain rather than both brains – its use promoting a culture which rewards its further use.[9]

As the network metaphor seeps into our awareness, it will gently tease out the roots of our machine metaphor and invite new appraisal. The picture that arises of the last 350 years is one of an astonishing type of scientific advance made possible by, and so reinforcing, a particular way of perceiving the world that has nonetheless dulled our sense of the interconnectedness of things.

Our Western Enlightenment starts to feel a touch lop-sided. Despite the frequent paeans to the Scientific Revolution, the question becomes not ‘how Enlightened are we’, but rather ‘how are we Enlightened’? Intriguingly, the other major Enlightenment that merits capitalization in English – the Buddhist Enlightenment of the 5th Century BC – promoted an alternative vision of cultural progress by heading in a distinctly different cognitive direction. Possibly, there’s something to glean from both Enlightenments.

The question becomes not ‘how Enlightened are we’, but rather ‘how are we Enlightened?’

A Reshuffling of Disciplines

At a more practical level, a world metaphor of networks may prompt a reshuffling of scientific disciplines in our estimation. Certain mysteries of the Universe have yielded more easily to a machine metaphor than others and the associated disciplines – such as physics, chemistry and physiology – made more ground more quickly than others simply by having taken less complex matter as their focus. Inevitably, these fields were accorded primacy and prestige that resulted in widespread adoption of their methods in a way that influenced the shape of science generally.  

In contrast, other disciplines – such as sociology, anthropology, biology and ecology – have been handicapped by holding to a more systemic view of the world. It has simply been a tougher beat to be a systemist these past few centuries. As we come to understand the influence of the base metaphor on the chronology of scientific discovery, there may be greater recognition that we have perhaps too easily dismissed systemic thinking’s potential.

Helpfully, too, some of the technical and cultural impediments to systemic thinking are falling. The development of mathematical techniques for complexity and the rise of computation mean that more complex systems can be modelled. But there is also increasing recognition of the inadequacy of quantification to capture some of the key attributes of networks. Brian Goodwin, the Canadian biologist, argued we need to develop a ‘science of qualities’ to complement the ‘science of quantities’ instituted by Galileo in the 17th Century.
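As a small illustration of how accessible such modelling has become, the sketch below uses the open-source networkx library to grow a network by preferential attachment (the Barabási–Albert model, offered here as one standard example rather than anything cited in this essay) and then summarises its degree distribution – the kind of heavily skewed pattern that simple averages obscure.

```python
# Sketch: growing a network by preferential attachment with networkx, then
# summarising its degree distribution. Illustrative only; the parameter
# choices are arbitrary and not drawn from the essay.
from collections import Counter
import networkx as nx

G = nx.barabasi_albert_graph(n=10_000, m=2, seed=42)  # each new node attaches to 2 existing nodes

degrees = [d for _, d in G.degree()]
distribution = Counter(degrees)

print("nodes:", G.number_of_nodes(), "edges:", G.number_of_edges())
print("max degree:", max(degrees), "mean degree:", sum(degrees) / len(degrees))

# A few highly connected hubs alongside many sparsely connected nodes --
# a qualitative signature that a single average figure obscures.
for degree, count in sorted(distribution.items())[:5]:
    print(f"degree {degree}: {count} nodes")
```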

A network world might also encourage greater interdisciplinarity. It is notable that many systems thinkers are described as polymaths, indeed are often perceived as artists not scientists. Capra identifies Leonardo da Vinci, not Galileo, as the first real scientist – comfortable along the full spectrum of reductionism and systemism and understanding that science required different types of seeing. One ruefully starts to wonder how much Western culture was shaped by the 200-year disappearance of da Vinci’s notebooks, during which time Descartes was at work.

We tend to hold polymaths in special esteem, not fully recognizing that their rarity is more a reflection of our cultural choice to educate and train for depth of knowledge, not breadth. It is not necessarily that the best polymaths know more than the best experts, only that their curiosity inclines them to swim across the pool of knowledge rather than up and down the officially designated lanes. Our culture might be richer, our knowledge of the world more complete if we more deliberately cultivated such talent.   

One of the disciplines most likely to change – and whose change has greatest capacity to change us – is economics. Of all the major social sciences in the 20th Century, economics was the most seduced by the certainties and universalities reductionism seemed to offer, and developed a full-blown physics envy, in which economists sought universal laws of human behaviour and searched for equilibria in the incessant flux of a growing economy. These pursuits, combined with a distinctive embrace of mathematics, led to a regrettable severance of ties with other social sciences, including psychology, which might otherwise have proved a helpful neighbour. From the 1930s to the 1980s, economics and psychology were barely on speaking terms.

Effectively, it was the mistake of 20th Century economics to take the science of dead things as its guiding template, and it was the mistake of much post WW2 politics to institutionalize so many of 20th Century economics’ recommendations. Unfortunately, neoliberalism is what results when you apply scientific techniques suitable for analysing dead things to the living fabric of society and ecology, and pursue them to their logical conclusion.

…neoliberalism is what results when you apply scientific techniques suitable for analysing dead things to the living fabric of society and ecology, and pursue them to their logical conclusion.

Consequently, we will not fully be in a position to address our social and ecological challenges until we recognize that they emanate not from the economic growth of the 20th Century, but from a cognitive frame shift of the 17th Century. Though we view today’s ecological problems as the product of modern living, they are effectively 17th Century problems with a lag. Encouragingly, economics is today looking to the life sciences for inspiration. There has been a clear rapprochement with psychology, and economics may even develop a biology-envy as the 21st Century progresses – developments which will surely remake the field, and which may yet help repair our environment.

Conclusion

1989, then, an important year. Revolution on the streets and a changing of the guard in our lexicon. The year has not quite lived up to Fukuyama’s billing as the “end of history”, but it possibly marks the end of the Machine Age of Human Cognition (c. 1637-1989). Indeed, another largely unnoticed event from 1989: in March, a young Tim Berners-Lee wrote an internal memo to his superiors at CERN proposing a new means of ‘information management’ based on a ‘distributed hypertext system’. “Vague but exciting” was the first verdict cast upon what became the World Wide Web.

Of course, I don’t seriously mean to pin such momentous change into the narrow confines of a 12-month period, but it helpfully flags that something significant is occurring in the realm of human and cultural cognition. For the first time in three or four centuries, more systemic ways of interpreting the world find the winds of language and everyday experience at their back. Let’s hope it’s not too late.

Duncan Austin has spent 25 years working in the field of sustainability. He was raised Orthodox Reductionist but has recently converted to New Systemism.


References


[*] More specifically, a term search of various periodicals reveals the following dates as points of crossover: Daily Telegraph: 1984; London Times: 1985; Guardian: 1987; NY Times 1990. Interestingly, the Times of India is later, in 2000, but exhibits very fast growth in the term ‘network’ thereafter – sort of ‘later, but faster!’


[1] Michael Elliott, ‘Time’s Annual Journey: 1989’, Time <http://content.time.com/time/specials/packages/article/0,28804,1902809_1902810_1905185,00.html>. Francis Fukuyama, ‘The End of History?’, The National Interest, 16, 1989, 3–18.

[2] Jean-Baptiste Michel and others, ‘Quantitative Analysis of Culture Using Millions of Digitized Books’, Science, 331.6014 (2011), 176–82 <https://doi.org/10.1126/science.1199644>.

[3] Eitan Adam Pechenick, Christopher M. Danforth, and Peter Sheridan Dodds, ‘Characterizing the Google Books Corpus: Strong Limits to Inferences of Socio-Cultural and Linguistic Evolution’, PLOS ONE, 10.10 (2015), e0137041 <https://doi.org/10.1371/journal.pone.0137041>.

[4] Frans de Waal, Are We Smart Enough to Know How Smart Animals Are? (Granta Books, 2016).

[5] Fritjof Capra and Pier Luigi Luisi, The Systems View of Life: A Unifying Vision (Cambridge University Press, 2014).

[6] Jeremy Lent, The Patterning Instinct: A Cultural History of Humanity’s Search for Meaning (Prometheus Books, 2017).

[7] Jacques Ellul, The Technological Society (Knopf, 1967). Kevin Kelly, What Technology Wants (Penguin Books, 2011). W. Brian Arthur, The Nature of Technology: What It Is and How It Evolves (Penguin UK, 2010).

[8] Wendy Wheeler, Expecting the Earth: Life, Culture, Biosemiotics (Lawrence & Wishart, 2016), p. 170.

[9] Iain McGilchrist, The Master and His Emissary: The Divided Brain and the Making of the Western World (New Haven: Yale University Press, 2009).