What is CyberLife?
By Steve Grand, former Technology Director at CyberLife
CyberLife is Creature Labs' proprietary A-Life technology based
on the application of biological metaphors to software-complexity
problems. As software becomes increasingly complex we start to face
problems of how to manage and understand the systems we build.
However, the levels of complexity of these systems are trivial in
comparison to those of even the most modest biological systems. Why
then with all our genius, logic, and organisational abilities do we
find it so difficult to build complex systems? After years of
research, it seems that the problem lies in the way we think about
and approach complex systems.
From mechanistic to systemic
Traditionally, science was all about breaking down systems into
their constituent parts. These parts would then be analysed to
reveal their structure and the functions they perform. This was the
prominent endeavour of the 19th century, and was very useful as a
method of gaining understanding about many things including simple
biology, medicine and physics. During the 20th century, our
endeavours focused on building systems, from the industrial
revolution through to the digital revolution. However, somewhere
along the way we had a paradigm shift and decided that the way to
build or model complex systems was to consider the behaviour
required and try to capture this in high level constructs. Massive
rule bases were developed in order to capture the intelligence and
subtlety of human and animal behaviour. Needless to say, these
systems fell far short of their goal.
The root of the problem seems to be that the abstracted
knowledge has no grounding - there is no actual physical meaning
behind any of the concepts. Therefore, if the programmer of the
system has not considered a possible situation, the response of the
system may turn out to be erratic, wrong or non-existent. Natural
systems are rarely this brittle. All animals learn from experience
and generalise. An animal will never be in exactly the same situation
twice; however, it has the innate ability to reason about the
similarities between its current situation and those it has
experienced in the past. The animal will then usually perform some
action that was profitable to it in the similar situations of its
past. If this is a bad thing for the animal to do, it will learn
from its mistakes and try out some other behaviour if faced with a
similar situation in the future. Why then don't we base our
artificial systems on biological systems?
Well, that is exactly what we are doing with CyberLife. If we
want a system that behaves like a small creature, then we build a
small creature. We model large numbers of cells in the brain
(neurones), and connect them up and send signals between them, in a
way similar to natural cells. We model blood-streams and chemical
reactions. We model a world for the creature to inhabit, and
objects for the creature to interact with. Finally we model
diseases, hunger, emotions, needs and the ability for the creature
to grow, breed and evolve. Only then do you get a system that
behaves like a creature.
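The neurone-level modelling described above can be conveyed in miniature. The toy `Neurone` class below is purely illustrative - far simpler than anything in CyberLife - but it shows the basic idea: cells that accumulate incoming signals, fire over a threshold, and pass signals on to connected cells.

```python
class Neurone:
    """A toy model cell: it sums incoming signals and fires when its
    activation crosses a threshold, passing a signal onwards."""

    def __init__(self, threshold=1.0, decay=0.9):
        self.threshold = threshold
        self.decay = decay          # unfired activation leaks away
        self.activation = 0.0
        self.outputs = []           # (target neurone, synaptic weight)

    def connect(self, other, weight):
        self.outputs.append((other, weight))

    def stimulate(self, signal):
        self.activation += signal

    def step(self):
        """Fire if over threshold; otherwise decay. Returns True on firing."""
        if self.activation >= self.threshold:
            for target, weight in self.outputs:
                target.stimulate(weight)
            self.activation = 0.0
            return True
        self.activation *= self.decay
        return False

# A tiny chain of three cells: a stimulus to the first propagates along.
a, b, c = Neurone(), Neurone(), Neurone()
a.connect(b, 1.5)
b.connect(c, 1.5)
a.stimulate(2.0)
for _ in range(3):
    print([n.step() for n in (a, b, c)])
```

Even at this scale, nothing in the code says "propagate a signal down the chain"; that behaviour falls out of many cells obeying the same local rule.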
The first results from this philosophy can be seen in Creatures.
Take a look, interact with them. Decide for yourself.
A short history of A-life
A-life is not AI
We believe that true intelligence is an emergent property of
lifelike systems. Conventional approaches to artificial
intelligence do not lead to true intelligence, just "smartness."
This is because they attempt to create intelligent behaviour
without regard to the structures that give rise to such behaviour
in the real world (i.e., organisms). In the space of all possible
machines, there may be many regions that show intelligence, but we
only know where one of those regions lies - the region occupied by
living creatures. Approaching AI without regard to Biology is just
thrashing around in the dark.
CyberLife approaches the problem through simulation. We argue
that certain types of simulation can become instances of the thing
being simulated. By simulating suitable brain-like structures, we
create brains, and (given suitable inputs and outputs) those brains
will be intelligent and have minds of their own. By simulating
biological organisms in the correct way, we create biological
organisms. Artificial intelligence is not achieved by trying to
simulate intelligent behaviour, but by simulating populations of
dumb objects, whose aggregate behaviour emerges as intelligent.
Origins of Artificial Life
Though A-Life and artificial intelligence approach a common
problem from radically divergent perspectives, historically they
are closely related, both evolving from the work and research of
Alan Turing and John Von Neumann. Turing's theoretical work in
1930s Britain, followed by his wartime code-breaking, initiated
computer science in general and set off a wealth of scientific and
philosophical discussion about the viability of a thinking machine.
The Turing machine is a
theoretically defined computing system with an infinite tape,
capable of performing any possible computation.
Von Neumann's design for a digital computer in the 1940's was
inspired in part by his research into computational neuroscience
and theoretical research into cellular automata and
self-reproducing systems. Studying the "logic" of reproduction, Von
Neumann defined a universal constructor, a computational system
capable of reproducing any system, and realised that its
self-description must function both as instructions and as data. He
also remarked that
errors in copying self-description could lead to evolution, which
could be studied computationally.
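Von Neumann's automata were vastly more elaborate, but the flavour of a cellular automaton can be conveyed by the elementary one-dimensional kind, where every cell updates from the same shared rule table. This sketch is illustrative only; rule 110 is an arbitrary choice.

```python
def step(cells, rule=110):
    """One update of an elementary cellular automaton: each cell's next
    state depends only on itself and its two neighbours (wrap-around)."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left, me, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (me << 1) | right    # neighbourhood as 0..7
        nxt.append((rule >> index) & 1)            # look up the rule table
    return nxt

row = [0] * 31
row[15] = 1                        # a single live cell in the middle
for _ in range(10):
    print(''.join('#' if c else '.' for c in row))
    row = step(row)
```

Complex, unpredictable patterns grow from a rule table only eight entries long - the same lesson Von Neumann drew about simple local rules and global behaviour.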
Despite this pioneering research, computer science neglected
A-Life for many years, focussing instead on AI and cybernetics.
Computational evolution eventually developed once genetic
algorithms were formally defined by John Holland in the 1960s.
Still, the field of A-Life had to wait until the late 1980's to
achieve unity and visibility.
In September of 1987, the first workshop on Artificial Life was
held at the Los Alamos National Laboratory in the United States.
One hundred and sixty computer scientists, biologists,
anthropologists and other researchers were brought together and the
term A-Life was officially coined. The organiser of the conference,
Christopher Langton, presented a paper there which is now widely
regarded as the manifesto defining A-Life's agenda.
Return to Eden
Around four billion years ago, chemistry learned the art of
co-operation, and as a consequence life began on this planet. Since
then, the combination of random mutation and non-random selection
known as evolution has pressed us onwards and upwards: ever more
complex; ever more adaptable. Over the aeons, evolution gave many
of us more and more powerful brains, with which we gained ever
increasing control over our destiny.
A mere few thousand years ago, some of us (those who call
ourselves Humans) began to understand ourselves and our world well
enough to start to interfere with that process of evolution. First
came agriculture, where deliberate selective breeding led to
life-forms that would otherwise never have existed, such as wheat
or the dairy cow. After this, the development of scientific
reasoning led to a greater understanding of ourselves as machines.
This in turn accelerated the technology of medicine, whose power to
overturn the random accidents of evolution has now all but stopped
our own natural selection in its tracks.
Very recently, through our understanding of the theory of
machines, we have begun to comprehend and be able to manipulate
life at a very profound level indeed. We are now ready to return to
the Garden of Eden, whence we came. However, this time we will not
be mere produce of the garden, but gardeners ourselves. Human
knowledge has brought us to the verge of being able to create
life-forms of our own.
Humans have been able to generate more humans for a very long
time (and a good deal of fun has been had in the process). Never
before, however, have we been in a position to create life to our
own design. Scientists are already able to alter the genetic
structure of existing simple organisms such as bacteria, in order
to produce 'designer' life forms. We can make apples stay fresh
longer and breed giant strawberries. In the future it is probable
that they will be able to synthesise whole organisms from basic
chemicals, creating life where there was none before. However, this
is not only a long way off, it is also, in a sense, the least
profound way in which we will be able to create life. Synthetic life
of this kind is merely 'life in our own image', yet carbon
chemistry is only one of the ways that life can exist.
We are beginning to realise, the more we study the attributes of
life, that life isn't so much a property of matter itself (so that
you can only generate living things from carbon chain chemistry),
but that it is a property of the organisation of matter. Much of
science is currently undergoing something of a revolution in its
thinking, and it seems that one consequence of this shift will be
the genesis of other classes of living things, whose minds, if not
whose entire bodies, lie within the memory of a digital computer
and eventually, perhaps, collections of networked computers.
With CyberLife, we are making our first tentative steps towards
a new form of life on this planet. Sitting in a tank, on the very
PC with which these words were written, are two small and rather
stupid creatures. One is called Eve, and the other, for reasons
best left to another paper, is called Ron. They are not highly
intelligent, and they hardly ever do as they're told (just like
children). Yet they are quite easy to become attached to, and
hopefully they will have many generations of descendants,
throughout the world. Some of these offspring, or their cousins,
may learn to do useful jobs for people or simply to keep people
entertained until the day comes when we know how to create truly
intelligent, conscious artificial beings. It is also hoped that
those conscious beings will find a place in their hearts for the
memory of Ron and Eve.
"By the middle of this century, mankind had acquired the
power to extinguish life on Earth. By the middle of the next
century, he will be able to create it. Of the two, it is hard to
say which places the larger burden of responsibility on our
shoulders."
- Dr. Christopher G. Langton
Once upon a time, all machines were integrated with living
things. Every plough was pulled by oxen and guided by a man; every
lathe turned by hand and controlled by eye. The Industrial
Revolution removed the need for muscle power, and the progress of
automation has reduced our reliance on human supervision for the
control of machines. However, many jobs cannot be done, or are done
badly, without a living organism at the helm: a tractor can pull a
larger plough than a team of oxen can, but unlike the oxen it
cannot refuel itself or navigate rough terrain without a human
brain to guide it. CyberLife is concerned with the re-vivification
of technology. Through CyberLife we are putting the soul back into
lifeless machines - not the souls of slaves, but willing spirits,
who actually enjoy the tasks they are set and reward themselves for
doing them well.
CyberLife is thus the art of creating and embedding living
things into machines, either in software or hardware. However,
underlying CyberLife is a set of more fundamental principles,
themselves quite far-reaching, that need to be understood by all
participants if the CyberLife promise is to be fulfilled.
Building machines in cyberspace
"Cyberspace" could be defined as "the location at which two
people meet when they are engaged in a telephone conversation."
This definition expands to encompass all networked communication
quite easily, and the concept of the internet as a 'container' of
cyberspace, or a channel 'into' cyberspace is now commonly
understood throughout the world. Since the emergence of the notion
of "virtual reality," cyberspace has become a broader and more
powerful concept, representing a world 'inside' a computer, and
people are now beginning to be able to walk around cyberspace, see
it, manipulate it, meet people in it, and shoot them.
The idea that a computer is a container and life-support system
for cyberspace has begun to have profound implications for
programming methodology. Originally, computers were designed to be
fast calculating machines (hence the name), and were even used to
compute such abstract operations as the Calculus, despite the fact
that Newton (and Leibniz) only invented the Calculus because they
didn't have computers to iterate their approximations for them!
Once computing got into its stride, however, the notion of
'Procedural Computation' took hold, and computers became machines
for expressing algorithms, rather than merely evaluating
expressions.
Now, expressions describe relationships and algorithms describe
processes, and both of these tools very quickly became used to make
computer models of real-life systems, for example to forecast the
weather or compute the behaviour of bridges in a wind. The
expressions and the algorithms in such models directly describe the
behaviour of those things (air masses or bridges); they don't
necessarily, however, describe the things themselves. The recent
adoption of Object-Oriented Programming turns that view around, and
considers a computer to be a device for modelling things, each of
which has certain properties, and it is the relationship between
those properties which then describes the behaviour of the system.
The modern programming paradigm is therefore one in which the
programmer constructs objects, which reside in the cyberspace
within a computer: A computer is a machine which contains
cyberspace, which contains machines.
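That view can be sketched in miniature. The `Ball` class below is an invented example, not CyberLife code: the programmer describes a thing and its properties, and the system's behaviour follows from the relationships between them.

```python
class Ball:
    """An object in cyberspace: it knows its own properties, nothing
    about the behaviour of the system as a whole."""

    def __init__(self, height, velocity=0.0):
        self.height = height        # metres above the floor
        self.velocity = velocity    # metres per second, upwards

    def update(self, gravity=-9.8, dt=0.1):
        """Advance the object's state by one small time step."""
        self.velocity += gravity * dt
        self.height += self.velocity * dt
        if self.height < 0:         # hit the floor: bounce, losing energy
            self.height = 0.0
            self.velocity = -self.velocity * 0.8

# Nowhere is 'bouncing' written down as an algorithm; it emerges from
# the relationship between the object's properties.
ball = Ball(height=10.0)
for _ in range(100):
    ball.update()
```

The contrast with a procedural program, which would script the trajectory directly, is the point: the machine in cyberspace carries its own behaviour with it.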
Object-orientation is rapidly becoming popular, at least partly
because certain advantages such as portability, reusability and
robustness come along with it. However, old habits die hard, and
many programmers who lap up these undoubtedly valuable side-effects
still fail to appreciate the profundity of the concept of a
computer as a machine which contains cyberspace which contains
machines. Yet profound it certainly is.
The tools which have made science so powerful over the last
three thousand years are largely analytical and reductionist: if
you want to study something very complex, take it to pieces first,
then study the pieces; if the behaviour is too messy, develop a
simplified model, without the messy bits, then study that; if the
proper equations cannot be solved, invent some simpler ones that
can. Nobody would doubt that this reductionist approach has been
very successful. However, many people are equally aware that there
are fundamental flaws in this kind of reasoning: everyone knows
that sometimes "the whole is greater than the sum of its parts". If
you study only the parts, then you are missing something crucial
about the whole. This 'something' is known as 'emergent properties'.
As a trivial example, 'soccer' is an emergent property, which is
vested in no single soccer player. Only when a whole team is
together can soccer exist. Similarly, the mind exists because of
the interaction of the billions of neurones in its brain, yet no
single neurone could be said to contain the mind, nor even any part
of it. Only when the assembly is acting as a co-ordinated whole
does the mind exist. Moreover, the existence of the mind would come
as a complete surprise to anyone who was only given one of its
neurones to study in isolation.
Not only is the whole more than the sum of its parts, but in
general the whole cannot even be predicted by a study of the parts
alone. Therefore, science has been missing a great deal by its
reductionist approach. What is more, it has missed out on much of
what we (as more than the sum of our parts) actually find
interesting in the world. One of those things is the secret of life
itself.
Many hands make light work
Once you accept the idea of a computer as a container of virtual
objects, you have available a plurality that isn't there when you
think of a computer as a processing machine: a computer is one
machine, yet it can contain many virtual machines. There are some
very important things that many machines can do together that one
machine can't do alone.
Think of an ant, for example. A marvel of complexity and
sophistication it may be, but no ant is smart enough to design,
memorise or communicate the plan for an ant nest. There is no
master architect ant, who stands there in a hard hat and red braces
instructing the other ants on where to build, yet an ant nest is a
very complex and organised structure.
Think of a raindrop. Examine it. Construct differential
equations to describe its behaviour. Do you see Niagara in that
raindrop? Would you predict such a thing from it? Yet Niagara is
only a bunch of raindrops acting in concert.
Think of your childhood. You remember it clearly, don't you? Yet
you weren't there! Not a single atom that's in your body now was
present when you were a child. You're not even the same shape as
you were then. No thing has remained constant, yet you are still
the same person. Whatever you are, you are not the stuff of which
you are made; yet without that stuff, you would not be anything at
all. Material flows from place to place, and momentarily comes
together to be you. If that doesn't make the hair stand up on the
back of your neck, read it again.
These examples all have one thing in common, and the
implications that can be drawn from that are varied and deep. The
common feature is that each example shows a unified structure or
process that exists only because many small things, with relatively
simple properties, come together in one place and interact with
each other. The principle that Niagara cannot be inferred by
looking at a raindrop is known as emergence, and the school of
thought that attempts to capitalise on emergence by building large
edifices from many small building-blocks is known as bottom-up.
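A classic bottom-up demonstration - borrowed from the A-Life literature, not from CyberLife itself - is the 'termites and wood chips' model: agents following two dumb local rules produce piles that no individual ever planned. A one-dimensional sketch, with invented parameters:

```python
import random

random.seed(1)

WORLD = 40
chips = [random.random() < 0.3 for _ in range(WORLD)]
START = sum(chips)                  # total chips, for book-keeping

class Termite:
    """A deliberately stupid agent: wander at random; pick up a chip
    if you step on one empty-handed; when carrying, drop your chip
    near the next chip you bump into."""

    def __init__(self):
        self.pos = random.randrange(WORLD)
        self.carrying = False

    def step(self):
        self.pos = (self.pos + random.choice((-1, 1))) % WORLD
        if chips[self.pos] and not self.carrying:
            chips[self.pos] = False
            self.carrying = True
        elif chips[self.pos] and self.carrying:
            p = self.pos            # find the nearest empty cell rightwards
            while chips[p]:
                p = (p + 1) % WORLD
            chips[p] = True
            self.carrying = False

termites = [Termite() for _ in range(5)]
for _ in range(2000):
    for t in termites:
        t.step()

# No termite knows about piles, yet the chips end up clustered:
print(''.join('#' if c else '.' for c in chips))
```

Piling is nowhere in the rules; like Niagara, it exists only in the aggregate.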
Starting from scratch
One very practical consequence of bottom-up thinking to a
programmer is how it helps with the management of complexity.
Imagine trying to define an adventure game in terms of a decision
tree - this is the top-down approach. Suppose you progress through
the game by taking decisions, always from a choice of two. The
structure to describe this would be a binary tree: the first
decision could be taken in two ways; each of those choices leads to
another decision, giving you four different routes through the
tree. Add a few more levels of decision-making and you have a tree
with 2^n leaves (32 decision steps = 2^32 = 4,294,967,296 possible
routes).
This in itself is not an especially intractable problem, even if
routes can double-back, or many choices are available. However,
imagine that we add a computer player, and that every decision he
takes affects the choices you have available. How many nodes do we
need on our tree now? Well, it depends on how the interactions are
implemented, but the decision tree is not twice as big as before,
it is many billions of times as big. Now add ten more computer
players and stir!
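The arithmetic behind that explosion is easy to check. The squaring below assumes, purely for illustration, that every one of my routes can be paired with every one of the other player's.

```python
# A binary decision tree of depth n has 2**n distinct root-to-leaf routes.
def routes(depth):
    return 2 ** depth

assert routes(32) == 4_294_967_296   # the figure quoted above

# With a second, fully interacting player, each of my routes can in
# principle combine with each of theirs - roughly squaring the space:
print(routes(32) * routes(32))       # about 1.8e19 combined routes
```

Each additional interacting player multiplies the space again, which is why enumerating behaviour top-down becomes hopeless so quickly.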
The great thing about living creatures is that they are general
solutions to problems. A squid may solve a dramatically different
set of problems than a mole does, but the methods of solution are
only variants, not fundamentally different from each other. Throw a
mole into an ocean and it will not swim like a squid, but both
types of creature share a common ancestor, and thus each is an
example of how that ancestor solved a different problem. The
adaptable, building-block nature of living things and the fact that
they rarely need to reinvent the wheel (because they inherit their
solutions from their parents) makes organisms capable of solving a
huge range of environmental tasks.
Living creatures generally solve only problems that are of
interest to themselves: evolution adapts them to new ecological
niches where they can thrive unmolested; brains help them to solve
the problems associated with getting food, finding a mate and so
on. This is fine for them, but not a lot of use to us. However, if
we were able to create life-forms to our own design, we would be
able to select the problems that we wanted them to solve. This has
been the failed goal of Artificial Intelligence research.
Unfortunately, AI has devoted decades to looking at the task from
the wrong end - from the top-down. The real answer lies in
emulating Nature's way and creating life from the bottom-up.
Every human being starts out life as a single cell. That cell
divides into two almost, but not quite, similar cells. Each cell
switches on slightly different genes before dividing again.
Eventually you have many trillions of cells, performing many
thousands of distinct tasks, each doing its own little job without
being overseen by any 'master cell.' Each contributing blindly to
the working of the whole, to the emergence of us. Therein lies a
very powerful idea: general-purpose building blocks, whose
behaviour is controlled by data (genes and the local environment),
and which interact locally to produce behaviour from the whole
system that could not be predicted from, is not resident in and is
not controlled by any single one of those parts.
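That idea - identical building blocks whose behaviour is steered by data - can be sketched as a toy 'genome'. All the cell types and rules below are invented for illustration; real development is incomparably richer.

```python
# (own type, neighbour's type) -> type of the daughter cell
GENOME = {
    ('stem', None):    'stem',
    ('stem', 'stem'):  'skin',
    ('skin', 'stem'):  'nerve',
    ('skin', 'skin'):  'skin',
    ('nerve', 'skin'): 'nerve',
}

def divide(cells):
    """Every cell carries the same genome; which 'gene' it expresses
    is decided purely by its immediate neighbour - no master cell."""
    new = []
    for i, cell in enumerate(cells):
        neighbour = cells[i - 1] if i > 0 else None
        new.append(cell)
        new.append(GENOME.get((cell, neighbour), cell))
    return new

body = ['stem']
for _ in range(4):
    body = divide(body)
print(body)
```

A differentiated 'body' appears, yet no rule in the genome mentions the body at all - only a cell and its neighbour.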
Creature Labs' "CyberLife" is founded on a set of philosophical
principles and assertions, as follows:
Think of a computer as a container for cyberspace; think of
programming as the creation of machines that populate that
cyberspace, rather than a recipe for the overall behaviour of the
system.
Swarms of simple objects interacting with each other have more
power and subtlety than a single top-down structure can provide;
they also minimise combinatorial explosions.
Complexity cannot be forced; it must be nudged and cajoled into
existence. Getting the right dynamics is as much an art as a
science.
Life is a complex network of feedback loops that allows a system
to hover on the brink of chaos, a regime which is capable of
generating complex, adaptive behaviour.
The best, if not the only, way to create living systems is to
create models of the building blocks from which existing life-forms
are made.
(copyright 2000 CyberLife Technology)
Note: Steve Grand is no longer a part of Creature Labs, nor does
he participate in the development of the Creatures series, though
he continues research and development in the field of A-Life along
with his wife, Ann, at Cyberlife Research in Shipham, Somerset,
England. A description of Steve's latest book, Creation: Life and
How to Make It, can be found at Harvard University Press.