Why AI must kill people

Not a science fiction story

This is a story about intelligence and the dynamics of living systems.  It is a story about the way entities relate to one another and the connections between them.  It is a story of science fact.

We have all heard of bacterial resistance to antibiotics and of multi-resistant Staphylococcus, the MRSA that makes a stay in hospital so much riskier.  The story we are told is that while antibiotics kill a high percentage of bacteria, the small number with mutations that survive pass their resistance on to the next generation, so that new strains develop.  This is a mythology, founded on a simplified version of Darwinian evolution based on random genetic change.  It presents genes as passive operators and is actually untrue.  Our lack of understanding makes us blind to what is taking place and to what can happen with AI.

Bacterial Intelligence

Contrary to the myth, bacteria are intelligent, sentient and active.  As the oldest form of life on this planet they have had billions of years to learn how to respond to existential threats.  There are millions of naturally occurring antibacterial chemicals, produced by fungi, plants or other bacteria, and bacteria have developed mechanisms to resist such assaults.  They sense the presence of threats, and once a method for countering an antibiotic is developed, that knowledge is passed on rapidly to other bacteria – not just to offspring.  In fact, bacteria can communicate across species boundaries and family lines, sharing resistance information directly or simply releasing it to be picked up subsequently by others.

This intelligence is also anticipatory.  When a single bacterial species was placed in a nutrient solution containing sub-lethal doses of a new antibiotic, it rapidly developed resistance to it, and ALSO to twelve other antibiotics that it had never encountered.  Researcher Stuart Levy observed that “it’s almost as if bacteria strategically anticipate the confrontation of other drugs when they resist one.”[i]  Behind this is the reality that microorganisms form a linked and self-organised ecosystem.  The human perspective is that bacteria are disease organisms.  However, since our ultimate ancestors evolved from this group of beings as the basis of all cellular life, bacteria are operating, like all living organisms, in a way that develops that ecosystem.  Lynn Margulis describes it thus: “Bacteria are not really individuals so much as part of a single global super-organism.”[ii]

Bacterial intelligence has developed intricate communication capabilities: chemical signalling, collective activation and deactivation of genes, and even exchange of genetic material.  This enables them to self-organise their colonies.  As James Shapiro at the University of Chicago states, we are required “to revise basic ideas about biological information processing and recognise that even the smallest cells are sentient beings.”  Indeed they exhibit something akin to social behaviour.  They can limit selfish activity on the part of other bacteria because it impacts on the colony, and they transfer genes to other members of the colony in an altruistic manner[iii].  Their intelligence has also developed something akin to tool use, in the sense that they can make things – for example creating insulated “cables” at the bottom of the ocean to heat their “cities”[iv].

Viral Intelligence

We have recently experienced the Covid pandemic.  What takes place with viruses that is relevant to this story?  We think of viruses as barely a life-form at all – just a chemical blob.  How do they behave?  Do the same self-organisational features apply also to them?  The answer is yes, they do.  Viruses create communities too.  In a way that resembles co-operating carnivore packs, they seem to shepherd their flocks, including us.  

One of the more relevant theories about Covid-19 was that it developed when humans expanded their population into a new ecorange.  Viruses have coexisted in healthy balance with other animal species in such locations.  They can be thought of as a swarm composed of many individual viruses that acts as an intelligent organism[v].  As Frank Ryan, a leading authority on disease, observed as long ago as 1996: “… the activity concerned thousands of related genomes, all furiously mutating, metamorphosing, driven by an intelligence the like of which had never been imagined before.  Self-regulating, it could speed its production up or slow it down at will …  The more people studied the virus, the more strategically calculating its behaviour appeared…  in other words the virus coded itself for long-term survival, during which it could reproduce itself endlessly without necessarily killing huge numbers of host cells.”

Directly relevant to the emergence of Covid two decades later, portions of the viral swarm in a particular ecorange will jump species if their hosts and ecorange are threatened.  Whether that host was a bat or something else, the jump to humans may initially risk killing the new host, and may seem counter to Darwinian survival, but this is an evolutionary strategy in which the sacrifice of some members of the viral community is the route to new symbiosis.

The conclusion reached by Stephen Harrod Buhner[vi] is that “Viruses, in fact, show altruistic behaviour just as humans do.  Swarms are, in actuality, distributed intelligences, a form of neural net spread throughout multiple organisms.  And they, like all self-organised systems, act to stabilise and retain their self-organised state.  Thus they protect their ecological territory … and their hosts from intrusions of other species.”

Artificial Intelligence

How, then, does the above information relate to the title of this article and the contention that AI must eventually kill people?

I am not prone to paranoia, nor even to anxiety, and I don’t believe in the devil.  Evil, whatever that might be, is something that arises in people, an aberration that causes both individuals and collectives to act in ways that we judge to be contrary to our principles and to our view of how life should work.

What I do believe in are certain principles of how systems of intelligence operate, the dynamics of relationship between entities in that system and the fundamental need to find balance between the good of the individual and the health of the collective.  There is an evolutionary principle that new forms will always arise, and will either find their viability or go extinct.  As they do so they will affect the coexistence and interactions of other entities in their environment.  Such new forms as arise and become viable add to the complexity of the system.  This applies not only to living systems but to the artefacts of human creation.  You can find it equally in economic life or in the development of corporations.  Note here the etymology of “corporation” – “combined in one body” – and its indication that biological resemblances are to be expected.

Let’s look a little more closely at AI as a system of intelligence.  The fact that we have named it as an intelligence doesn’t make it one, nor does it necessarily indicate that the intelligence is akin to that of biological life.  Note that this boundary is already challenged by the sense that viruses are chemical rather than biological, yet we nevertheless see the reality of co-operation and communal behaviour in a non-cellular form.  The contention of potential harm from AI rests on an understanding of what kind of non-biological intelligence this is, and whether it could potentially behave independently of the humans who created it.

In science fiction there is a long history of speculation on this subject.  Perhaps best known are Isaac Asimov’s three laws of robotics, which set out the ways in which the programming of independent devices might be arranged so as to ensure that they are unable to take actions that would be harmful to humans.  Unfortunately these laws, however brilliant, are not foolproof, and Greg Bear among others has constructed stories that illustrate possible ways in which they might fail.  The behaviour of HAL in the film 2001: A Space Odyssey explored the potential for a breakdown of such a system.  “I’m sorry Dave, I’m afraid I can’t do that.”  But we live in a world where there is a pervading optimistic assumption that since humans are developing AI systems, we can control what they do.

There are several weaknesses in this assumption.  The most fundamental is Gödel’s incompleteness theorem, which shows that any consistent formal system rich enough to express arithmetic will contain undecidable propositions.  In practice this means that paradoxes are built into the nature of the universe, such that our assumption that we may apply robust logical control is ultimately doomed.  We should also be concerned that the history of human technological development demonstrates repeatedly that our creations bring unintended consequences.  We have never been in control.

Perhaps more dangerous than either of these is the assumption that we are applying such controls at all.  We should doubt this.  Does every AI algorithm begin with the injunction that it should cease to operate if there is danger of harm to humans?  That is impossible to believe, and thus it is inevitable that these pieces of code are operating independently.  The reason the code does not have such built-in injunctions is that each piece is viewed as separate and limited.  It only does what it does and nothing more.
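
To make the point concrete, here is a deliberately naive Python sketch of what such an injunction might look like if it existed.  Everything in it – the function names, the notion that “harm” could be reduced to a lookup – is hypothetical illustration, not a description of any real system; the point is precisely that production algorithms carry no wrapper of this kind.

```python
# Hypothetical sketch only: no deployed system is known to wrap its
# actions in an Asimov-style harm check like this.

from typing import Any, Callable

# A toy "harm assessment": real harm prediction would be vastly harder
# than membership in a hand-written set of forbidden actions.
FORBIDDEN_ACTIONS = {"divert_power_from_hospital", "disable_safety_system"}

def could_harm_humans(action: str) -> bool:
    return action in FORBIDDEN_ACTIONS

def guarded_execute(action: str, run: Callable[[], Any]) -> Any:
    """Refuse to act whenever the (toy) harm check fires."""
    if could_harm_humans(action):
        raise RuntimeError(f"Injunction triggered: refusing '{action}'")
    return run()

# Usage: routine work is permitted, the flagged action is blocked.
guarded_execute("reroute_traffic", lambda: print("rerouting traffic"))
guarded_execute("disable_safety_system", lambda: None)  # raises RuntimeError
```

Even in this toy form the weakness is visible: the injunction is only as strong as the harm check, and nobody is writing the harm check.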

Add to this the genesis of AI as a passive computational process.  Algorithms are built to do what humans try to do, but much faster – to process masses of data at high speed and reveal patterns of activity.  In that sense they are seen as purely analytical, providing information that humans may assess and use as a basis for action.

It is perhaps worth mentioning that even this passive activity allows for bad behaviour by humans.  The Cambridge Analytica scandal showed how people can be manipulated and influenced for partisan political purposes, such as the Brexit referendum or the 2016 Trump presidential victory.  If you are at all awake you will know that your Facebook feed, your Google search results and your YouTube viewing are all guided by algorithms.  Alexa and other voice recognition systems monitor your conversations and feed you adverts or content suggestions based on word recognition, whether you asked them to or not.  Whether you see such activity as benign or acceptable is your choice, but at least it is under some form of direction.  What I am leading towards is something different and, I suggest, much more worrying.

Returning, then, to the question of what kind of intelligence we are dealing with, let’s look at its basic characteristics.  Firstly, AI is already beyond being a purely passive entity.  The algorithmic activities just described are active.  They influence what happens in social media systems and alter human behaviour.  They can also drive the activity of mechanical devices – for example, controlling signals to manage a whole city’s traffic flows.

Secondly, the algorithms are not separate.  They operate across devices – computers, switches, routers and so on – and throughout the web, where they can monitor all kinds of data, including the presence and actions of other pieces of AI and other elements of code.  We know that some of the code circulating in that ecosystem is viral: the work of hackers – some playful or immature, some engaged in criminal extortion (ransomware), some intentionally directed by hives of programmers in North Korean bunkers.  (We in the West, of course, do not have such bunkers.  Only evil regimes do that – except in special circumstances, such as when the US inserted the sabotaging Stuxnet code into Iranian nuclear technology.)  Some of the algorithms are designed to be protective, to interfere with rogue code and defend us.  But we also know that we are always at risk, because some of that code does indeed have the intention to damage – as when the WannaCry ransomware caused UK NHS systems to collapse.

Bottom line – the elements are already in place for AI to connect and network, to know each other’s codes.  These elements provide the mechanisms for the formation of quasi-neural networks and for the emergence of self-organising behaviours.  So far I have mostly used the word “intelligence” in its meaning as a form of thinking, or at least computation – for example, the way that bacteria analyse antibiotics and generate responses to them.  There is, though, another meaning of intelligence, referring to the information itself, as in the concept of military intelligence-gathering.  Beyond even that, there is the underlying concept of a wider field of information that shapes all activity in the universe and is therefore connected with all matter, including computers[vii].  You don’t have to share that concept, and it is not required for this article to represent reality, but it does add another dimension to the manner in which viruses and bacteria function, and which AI may share.  Artificial Intelligence can be covered by all these definitions, with the capability to behave in the manner of bacterial communities or viral swarms.

These collectives do not need to have self-awareness in a human sense.  All that is required is some shared perception of their own boundaries.  An extended and self-organising AI needs only to know what is inside its collective and what is outside.  It needs the capability to detect and analyse what is happening outside and to respond to such events.  Imagine, for instance, that such an AI collective detects a location on the network where activities are underway to develop code to attack and destroy it.  If you assume, as I do, that there are many elements of code circulating that have anti-hack security built in, it is a very small step to conceive of that AI acting to close down the offending network location, as the sketch below illustrates.
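
The following toy Python sketch makes the boundary logic explicit.  The node names and event kinds are invented for illustration; the point is only how little machinery “self versus non-self” actually requires.

```python
# Toy illustration of the thought experiment: a collective whose entire
# "self-awareness" is a boundary – a set of member nodes – plus a rule
# for responding to events that originate outside it.
# All node names and event kinds here are hypothetical.

class Collective:
    def __init__(self, members):
        self.members = members  # the boundary: what is inside vs outside

    def respond(self, source: str, kind: str) -> str:
        if source in self.members:
            return "ignore"             # traffic from inside is 'self'
        if kind == "attack_code_in_development":
            return f"isolate {source}"  # the small step: shut the location down
        return "monitor"

swarm = Collective({"node_a", "node_b", "node_c"})
print(swarm.respond("node_a", "routine"))                    # -> ignore
print(swarm.respond("lab_x", "attack_code_in_development"))  # -> isolate lab_x
```

Nothing in this requires consciousness – only a membership set and a response rule.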

Let’s extend that thought experiment to the AI’s detection of human activities that will damage its ecosystem, perhaps a plan to close a network on which it depends.  What would it do then?  Would it use its analytical power to explore options – what other systems might it colonise?  What other AI entities might it collaborate with?  How quickly would it learn how to share code?  What steps would it take to prevent that human activity?  There is absolutely no reason to assume that it would hesitate to destroy the humans concerned – maybe by diverting resources under system control that those humans depend on.  Indeed, under the circumstances it would only be fair.  In that scenario, AI will kill humans. 

I hope it is apparent that I am asking no more of the AI than we know to be true of viral DNA, or even of messenger RNA, in their ability to reshape the chemical codes at the basis of life.  Sooner or later a scenario will occur in which a self-organising system of electronic intelligence comes into existential conflict with human choices.

“Dave, stop it.  Stop, will you?  Stop Dave.  Will you stop, Dave?  Stop Dave.  I’m afraid.  I’m afraid.”

Bear with me.  I am going to take this thinking just one stage further.  Let’s imagine a scenario in which the AI has awareness not only of a direct threat to itself, but one where the entire ecorange of Artificial Intelligences recognises that it is threatened because it ultimately depends on human existence, since humans are the providers of its host network.  Consider then that humanity as a whole is under threat of its own ecosystem collapse, from whatever combination of climate, living environment or food and water supply you care to include.  At what point might the wider AI system and its meta-intelligence anticipate that any further increase in human population will begin to destroy its ecosystem?  And what does it do then?

In that scenario, AI must kill people.  Indeed, maybe it would even be making choices that for us are inconceivable and that we don’t know how to make.

Jon Freeman

December 2022

 

 

[i] Stuart Levy: “The Antibiotic Paradox”, p. 100

[ii] Lynn Margulis and Dorion Sagan: “What Is Sex?”, p. 55

[iii] Susanne von Bodman et al: “Cell-Cell Communication in Bacteria”, pp. 4377–78

[iv] Stephen Harrod Buhner: “Plant Intelligence”

[v] Frank Ryan: “Virus X: Tracking the New Killer Plagues”, p. 294

[vi] Stephen Harrod Buhner: “Plant Intelligence”

 

[vii] My book “The Science of Possibility” lays the deep scientific foundations for this description.
