The zeitgeist is pregnant with new ideas. Our minds are much alike. Our world is so linked up that we swim in the same pool. Subject to much the same stimuli, is it any surprise that we often originate similar creative ideas and insights? For practical purposes, it is not having a new idea that matters but exploiting it: getting it out there first, so that you can claim ownership of it—and this is a race where some people have a head start.
As an example, here is an idea I jotted down in 2010:
20_8_10
Mould: seed
a book in the ‘Day of the Triffids’ mode
in which disaster progressively overtakes humanity
in this case mould
it’s origins not clear
it could be genetically engineered
or some spores that have escaped from melting ice
either way, what they do
is comparable to those moulds
that take over insects
controlling them to climb to height
then fully take over the host
spreading spores to continue the infection
>have to check that this has not been done before
>this becomes a sort of alternative zombie film trope
lifted, unedited, from my notes
When I was told that this is the core idea behind The Last of Us (2023)—now streaming—I was spurred to write this piece.
The Last of Us is based on a computer game of the same name from 2013. And when I mentioned to a friend that I had had that idea many years back, they pointed out that ‘mouldy zombies’ was also the core conceit of The Girl with All the Gifts (2016)—I had watched that film but, for some reason, hadn’t noticed the mould. Admittedly, in these computer games and films, the mould seems little more than a pretext for how the zombies come about, and the infection is passed on, in the traditional way, by the infected munching on their victims. My story—the one I never wrote—was more closely based on the real-world inspiration: the fungus that takes over the brains of ants. There, the infection is spread by spores, which makes my scenario more devastating: you can’t hide from it behind a wall or by keeping your distance.
the impact of AIs
The ideas we’ve been talking about here are, essentially, relatively uncomplicated mashups: the combining of things, already out there, to produce something new. Since it is just this sort of mashup that the new AI systems, emerging daily, can perform at industrial speed, the race to own such an idea is increasingly one that humans are unlikely to win.
root inside
But do not despair. AIs are hollow: they lack consciousness. We humans not only have consciousness, but a version of consciousness that is unique to each of us. Thus, the deeper the roots of an idea burrow into our inner selves, the more likely it is to be unique and irreproducible by a machine.
Art is no longer going anywhere: it has arrived. We have explored the artspace available to us as fully as the surface of our planet.
From small beginnings, isolated in time and space, artists explored portions of the human artspace in response to the particularities of where they lived. As these fragments assembled into local topics and art styles, they eventually fused to form the art culture of a civilization. In a process naturally coinciding with global ‘exploration’, these civilizational art cultures were absorbed into that of the West—the ‘explorers’ who brought about globalisation. Stimulated by these traditions, further energised by developments in other spaces (the science space, for example), various art ‘movements’ exploded in accelerating succession. In the 20th Century, the limits of the human artspace were reached. On the one hand, concept became divorced from craft; on the other, an ever-broadening community of artists was spawned whose work is a mashup of everything that has gone before. Theoretically, there is nowhere left to go.
The human artspace was always going to be one of the more limited ‘spaces’ that we explored, because it is a realm wholly dependent on the human mind; if, as I believe, the mind must tend towards a limit, then what comes out of it must itself tend towards a limit. This is not to say that human art is not an infinite project, just that this infinity is constrained within a limit.
The advent of AI-generated images is now industrialising the process of filling in the gaps in the human artspace—somewhat analogously to how satellites are filling in the gaps in the map of our planet. These AI systems—by completing the separation of art concept from art craft—are likely to widen the category of ‘artist’ to take in all of humanity, returning us to a time where all art is personal: a redemocratisation of the process of making art.
a caveat
This democratisation of the process of making art depends on the AI image generators being readily available; if we allow the groups who built them by plagiarising human art to sit as rentiers on the sum of human creativity, then human creativity will have been privatised.
other limited spaces
A larger realm than the human artspace is the space of human science—since its object of study is the Universe, it obviously goes well beyond what we can imagine. The largest realm of all is the Universe itself, the exploration of which is limited by physical constraints—the distance between stars, for example, or the speed of light. Ultimately, our exploration of both of these spaces will be limited by the limit on our capacity to understand.
If you feed a prompt into an AI image generating system such as Midjourney, who is the artist? Most people seem to want to give the credit to the AI, but, depending on how you go about it, I believe that the Prompter can claim to be the artist.
At first glance, it does seem that all the Prompter does is to give a description of an image to the AI, but that the resulting image arises entirely from the ‘imagination’ of the AI; it would seem that the strongest claim the Prompter can make is that they are acting as an art director.
Like an art director, the Prompter gives a brief to the AI and, like an artist, the AI returns an image for inspection. If the Prompter were a typical art director, they might ask the artist to make some changes, but no art director would expect an artist to make dozens of images, as they reject one after another, giving the artist an evolving brief each time.
A better fit might be that the Prompter is acting like a film director. A film director may work on a script, may even act in their film, but mostly the actors do the acting, the cameramen operate the cameras, the lighting men light the scenes, the special effects people craft the special effects. Nor does the film director make the sets, nor the costumes, nor carry out all the other myriad tasks that are required to make their film. Instead, they make their team go through as many takes as necessary to achieve their artistic vision—revising scenes with the actors, asking for camera angles to change, for scenes to be relit, for new scenery and new costumes—until they are satisfied. Even though they do so little themselves directly, do we deny that a film director is responsible for the artistic vision of their film?
With the endless ‘takes’—the endless finessing of the ‘script’, the endless choices of which image to vary to find another, the choices of which contributors to reference, which style to follow—the process of arriving at an AI-generated image seems more like a film director at work than it does an artist. Is it then unreasonable that a person should claim artistic credit for the image that emerges from this exhaustive process, a process that is directed by them using their artistic vision?
In the 1980s, in the Neolithic of the computer game industry, three friends—Mark Wighton, Dominic Prior and Phil Mochan—and I, working under the name Torus, converted the highly successful game Elite into Z80 versions for the ZX Spectrum and Amstrad CPC. So, when Rory Milne phoned me up to ask me about it, I dredged the murky depths of my memory and told him as much as I could remember. The result is an interview with me and an in-depth description of Elite in its many incarnations that appeared in the August edition of Retro Gamer.
For a while now, the Digital Revolution has been industrialising creative production: with the advent of AI-based systems, it is now industrialising creativity itself.
Historically, creative production has been restricted by limits on resources, on talent and on the time required by humans to create something. Wealth—and ours is the wealthiest society by far that there has ever been—has steadily eroded these limits.
Resources—whether of materials, of developed creative skills, or for supporting creatives and distributing and marketing their creative goods—have become increasingly abundant. Our wealth, acting upon our vast population, has released a flood of human creativity, making a lifestyle as a creator viable for many; our appetite for creative goods is insatiable. The Digital Revolution has also supplied creatives with new tools and direct, fast connections to distribute their creations: even a child, sitting in their bedroom, can now create pictures, music, writing, video and share them worldwide at the speed of light. We may well be living through a New Renaissance.
But, though these new digital tools have widely enabled creative output and are greatly amplifying the abilities of creatives, the inherent limit of ‘human time’ has kept the resulting growth in productivity merely linear. Emerging AI systems, however, are now pushing the ‘envelope of digitisation’ up to the border of human creative imagination and are even making incursions into that ‘sacred realm’. Because processes within the digitisation envelope are indefatigable and can be continuously accelerated by technology, AI systems are breaking through the limit of human time into machine time; the resulting rise in creative output will be exponential.
the disappearing value of photographs: an object lesson
The more photographs we take, the less we look at them, though, ironically, we see more photographs than ever before; the number of photographs demanding our attention has increased so fast that, even were we to spend every waking second peering at them, there are always more to look at. It has become harder and harder to linger on a single photograph. The more photographs there are, the more worthless each becomes.
valueless abundance
Industrialisation has greatly increased the availability of ‘things’—both necessities and luxuries. The resulting abundance—if we ignore the attendant costs of climate change, pollution, ecological degradation, species extinction etc—has improved the lives and enriched the life possibilities of ever more people. Industrialisation of creativity is similarly going to produce an abundance of creative products. Whereas an abundance of necessities may be seen as a blessing for humanity, an abundance of creative products is likely to be a disaster. The AI systems will industrialise the skills that creatives once acquired through years of practice, and the possessors of such skills, so recently indispensable to the creative industries, now stand where the weavers stood at the dawn of the machine looms. As for the creative products themselves, like digital photographs, their abundance must surely make them increasingly disposable.
the sting in the tail
Though it is destroying our world, the industrialisation of things has at least provided us with food and clothes and houses, making so many of our lives richer: by contrast, the industrialisation of our creativity—bought at the same price—is raising a tsunami of creative output that, by its very abundance, will wash away the scarcity that was one of its chief values to us.
adjunct: NFTs, an absurd pretence at scarcity in abundance
NFTs were already absurd when they sought to persuade people that a digital object—copies of which are identical to the original—could somehow replicate the scarcity of, say, an oil painting by attaching a unique cryptographic label to what is effectively an infinite series. How much more absurd will they be when digital objects are so abundant as to become nearly valueless?
Rather than burying this conceit in some science fiction story, I am presenting it here for your interest.
If we knew all the operative genes in an organism’s genome, it would be possible to assign to each a unique prime number and, if we multiplied all these together, the resulting product would be a number—let’s call it an Organism Number—that defined that organism. Further, we know that such an Organism Number would define an organism’s genome uniquely, because the Fundamental Theorem of Arithmetic—which goes back to Euclid—states that every integer greater than 1 can be represented as a unique product of primes: that is, given any Organism Number, its factorisation would produce a unique string of primes.
All the organisms that constitute a species must have a subset of their genomes—a subset of genes—that they hold in common. Returning to our primes, the product of this subset would form a Species Number. If this Species Number divides any Organism Number with a remainder of zero, then we know that that organism is a member of that species.
But this sort of operation can be applied on many levels. More contentiously, for example, if we could find all the genes that are involved in a particular metabolic pathway, the product of the primes for those genes would give us a number that characterises that pathway. Again, if this number divides into an Organism Number with a remainder of zero, we can expect that organism to have that metabolic pathway*. Similarly, it might be possible to derive a number for an organ, or a specific anatomical structure such as a fin, or a feather, or an eye. There might even be a number for a disease, certainly we would expect there to be one for a genetic disorder.
What I am proposing with this conjecture is that, if it were possible to fully count how many operative genes each sort of organism has, we might be able to investigate the relationships between organisms, both living and across time—and on every level, from structure to chemical pathways to disease—entirely numerically, using computers.
*of course, it could be more complicated than this—an organism could have all the genes to enable a particular metabolic pathway but use another one instead
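The core of the conjecture is easy to try out in code. Below is a minimal sketch in Python; the gene names and the primes assigned to them are invented purely for illustration.

```python
# A toy sketch of the Organism/Species Number conjecture.
# Gene names and their prime assignments are hypothetical.

GENE_PRIMES = {
    "geneA": 2,
    "geneB": 3,
    "geneC": 5,
    "geneD": 7,
    "geneE": 11,
}

def organism_number(genome):
    """Multiply together the primes of every gene the organism carries."""
    n = 1
    for gene in genome:
        n *= GENE_PRIMES[gene]
    return n

# A hypothetical species defined by three shared genes.
species_genes = {"geneA", "geneB", "geneC"}
species_number = organism_number(species_genes)          # 2 * 3 * 5 = 30

member = organism_number(species_genes | {"geneD"})      # 210
another = organism_number(species_genes | {"geneE"})     # 330
outsider = organism_number({"geneA", "geneD", "geneE"})  # 154

# Membership test: the Species Number divides an Organism Number
# exactly (remainder zero) iff the organism carries every species gene.
print(member % species_number)    # 0 -> member of the species
print(another % species_number)   # 0 -> member of the species
print(outsider % species_number)  # 4 -> not a member
```

Because Python integers are arbitrary-precision, the same divisibility test would still work, unchanged, for the enormous products that a genome of tens of thousands of genes would generate.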
adjunct one: a wave in the sea
This notion of representing genes with prime numbers came to me when I was thinking about familial relations, and my current preoccupation with the nature of descent and inheritance (the illusion of lineages, for example).
A person’s Organism Number could be seen as the numerical representation of that person—though, strictly speaking, this number would only represent a person’s genome, not the phenotype that the circumstances of their life sculpts from that genome, and there is the further complication of epigenetics. But let’s not make this too complicated: for my daydream, it did not seem unreasonable to suggest that a person’s Organism Number represents much about that person that their family would recognise as being what makes them one of their family.
What I was reaching for with this metaphor was how, as we follow familial descent—through parents, to children, to grandchildren, to great-grandchildren—we focus on the familial similarities, though these slowly fade down the generations; what is really going on—under the hood, as it were, and in the context of this piece—is an exchange of prime numbers: it’s all about the numbers, the ‘selfish genes’, the Organism Number. The identity of a person is as transient as a wave in the sea.
adjunct two: Gödel Numbering
This is, of course, a simple application of Gödel Numbering to living things.
adjunct three: a refinement
The prime corresponding to a given gene might occur twice in an Organism Number, because that organism could carry a copy of that gene on each of two paired chromosomes.
A further refinement might be to give the alleles of a gene—the varieties of a given gene, such as those that code for brown and blue eyes—the same prime number, but to express each allele as a different power of that prime. So that, for example, a gene’s alleles might be represented as P, P^3, P^5, P^7, P^11 and so on. The reason I have omitted P^2 is that P^2 occurs when you have P on each of two paired chromosomes, which appears as P × P in an organism’s product of primes.
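This refinement, too, can be sketched numerically. In the toy example below the prime 13 and the allele names are hypothetical; alleles take the exponents 1, 3, 5 as described above, with 2 skipped because a homozygous pair of the first allele already contributes P × P = P^2 to the product.

```python
# Toy version of the allele refinement for one hypothetical gene.
# The prime 13 and the allele names are invented for illustration.

P = 13  # hypothetical prime assigned to an eye-colour gene
ALLELE_EXPONENT = {"brown": 1, "blue": 3, "green": 5}

def genotype_factor(allele_a, allele_b):
    """Contribution of a diploid allele pair to the Organism Number."""
    return P ** ALLELE_EXPONENT[allele_a] * P ** ALLELE_EXPONENT[allele_b]

print(genotype_factor("brown", "brown") == P**2)  # True: the reserved homozygous case
print(genotype_factor("brown", "blue") == P**4)   # True
print(genotype_factor("blue", "green") == P**8)   # True
```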
It is not the eyes alone that are the windows to our soul but our faces, whose fluid expressions unconsciously reveal our stream of consciousness, so that, to hide our souls, we resort to putting on fixed expressions, as masks.
In mirrors, people seek to see themselves as others see them. But, as fairytales tell us, mirrors lie. We can never see our face in a mirror but only our masks. As we fix our eyes on our reflection, it becomes impossible even to glimpse how our face looks when our eyes wander freely, or when it reacts to our thoughts, or to other people, or to the world. Mirrors are more insidious even than that, for they show us a face radically different from that which others see.*
Though photographs and videos can show us our face free from the misrepresentations of mirrors, they too mostly lie: unless manipulated by a skilful hand, a lens does not see a face as does an unmediated human eye.
Mirrors, photographs and video, by encouraging self-consciousness, coax us into putting on masks. People in the public eye—film stars, famously—are forced to live much of their lives behind their masks. Since the advent of the smartphone, alas, we all now live in the public eye—especially the young.
The curse of self-consciousness is that we turn in on ourselves; the more we wish people to see our face as we want them to see it, the more we trap ourselves behind our masks.
adjunct: unconscious masks
Some masks we wear unconsciously, most of which were adopted when we were children. Some were passed on culturally—when, for example, we unconsciously began mimicking an expression of a person that we admired or found funny. Others, we inherited from our families as we unconsciously mimicked the face of a parent as they expressed a strong emotion or did something as subtle as chew their lip as they concentrated on something. Because our face naturally resembles those of our parents, these unconscious masks may be nearly identical to theirs. Such masks may descend to us from who knows what ancestor.
*unless someone is standing by our side looking at our reflected face as we look in a mirror
When Blaise Ancona approached me to appear on his podcast, I was only too happy to do so. He had been reading his way through the Second Edition of my Stone Dance books, so that was mostly what we discussed.
When Deep Blue defeated Garry Kasparov, though computers had been closing in on Chess for some time, it still came as a shock. In retrospect, we realised that our mistake had been in believing that Chess was a peak of human intellectual achievement; in truth, the reason Chess is hard for us is why it is trivial for computers: the ability to explore deeply the branching possibilities ahead.
I remember drawing comfort from how badly computers played the ancient Chinese game, Go: even the most powerful computers could barely overcome amateurs. Unlike in Chess, the possible moves in Go branch so thickly that even the brute processing power of the fastest computer was quickly overwhelmed. Human players, citing ‘feel’ and ‘intuition’, easily defeated their silicon opponents.
When AlphaGo, an AI ‘deep learning’ system, defeated Lee Sedol, one of the strongest players in Go history, I was not the only one who shuddered as another wall of human intellectual exceptionalism came crashing down. In the intervening years, the rapid advances of ‘deep learning’ systems—designed somewhat analogously to mimic how we imagine networks of neurones in our brains to work—have encroached on many areas that once we considered the unique preserve of our intellect.
A few weeks back, Midjourney—another ‘deep learning’ system—erupted into the lives of many artists. When Midjourney is fed image and text ‘prompts’, it generates images that can be intoxicating in their beauty and variety, and it does so in seconds. As we watched other computer algorithms encroach on so many human professions, I, along with—I imagine—most creatives, had felt relatively safe within our enclave: sure, many of us had been using computers to make our art, but these were merely tools. Midjourney (and there are many other such AI image generators out there now) has burst through onto our hallowed ground to compete directly with us: clearly, it is making images at least as good as those produced by many commercial artists, in a fraction of the time and for a fraction of the cost. Even as those of us who are predominantly non-visual artists attempted to regroup behind the walls of our sub-enclaves, people were pointing out that AI systems are already composing music at least as well as humans can and that, soon, even books and poetry must fall. The machines are coming for us all.
At this point, I wish to draw a line behind which we creatives can make a stand: irrespective of how powerful these AIs become, I don’t believe that they can master everything we can.
Consider a simple narrative. Narratives naturally contain chains of causal events—these require a writer to understand the world enough to know why it is that one thing causes another. To make the characters in a story speak and act, a storyteller must ‘inhabit’ each one as if he were an actor in a play—this requires a writer to understand what it is like to be a sentient being. To understand something, you need to know what it is you know; you need to be conscious. Whatever else computer systems have achieved, they are not conscious.
Let us return to Midjourney for a moment. An image is more forgiving than a narrative—mostly, it depicts an instant in time, thus naturally excluding causality, and it only depicts the external appearance of sentient beings, not directly their thought processes. Nevertheless, even an image often requires an understanding of what something is to be able to depict it. In the image below (one of the first that emerged from my Midjourney adventure, from only the prompt of an image I had drawn myself) it can be seen that Midjourney does not understand what a face is, or hands.
It is easy to ascribe to computer systems some kind of understanding: after all, many of the things these machines are now doing are things that, in our lived experience, only sentient beings have been able to do. The recent case of the employee who became convinced that the Google AI chatbot he was engaging with was sentient is salutary; this despite the software doing nothing more than relating words to each other, so that it was as if he were conversing with a person who had had all of their brain removed except for some of its language centres.
I propose that consciousness is the citadel that remains to us inviolate by AI systems, and that this citadel will stand and fall on whether and when machine consciousness becomes possible. Despite what many say, no one knows why it is that the meat in our heads generates consciousness. The claim that we are moving towards machine consciousness is based on the belief that our brain is a computer. This is nothing more than a metaphor—before we had computers, there were people who believed our brains might be complex clockwork. Without knowing what causes consciousness, the techno-optimists out there wave their hands, saying that when computers become complex and fast enough, consciousness will ’emerge’ spontaneously. This is not science but faith. I believe there are good reasons why machine consciousness may never be possible; be that as it may, what is certain is that machine consciousness is not here today.
What then are Midjourney and its ilk? They may well be amplifiers for human creativity. As chess players have enhanced their play with the aid of computers, so artists may enhance their image-making by using systems such as Midjourney. These AIs may perhaps even enable artists to explore new vistas that they have not been aware of hitherto.
The example of AlphaGo is instructive. As it explored the restricted space of a Go board and its simple rules, unencumbered by human teaching, it made a discovery that humans, over three thousand years of play, had never imagined. All Go players who have ever lived had inhabited a single peak of possible play, the climbing of which led to excellence. Descending from this peak, the quality of their play diminished, so that no one had ever thought to venture far enough across the desert of this poor play to discover the neighbouring peak, undreamed of, that rises to even loftier heights.
Unlike the fisherman who inadvertently released a vengeful djinn from the prison of its bottle, we are unlikely to be able to trick our AIs back into it; instead, like Aladdin, we must risk asking our djinn to use its dangerous, prodigious powers to fulfil our wishes.
Recently, I became conscious of a tendency I have to think that I am part of a ‘lineage’: an instinctive feeling that I am on a path, with my ancestors behind me and, if I had children, that they would be on the path before me, walking into the future, with further on, further than I can see, more of my descendants. I suspect that this is not an uncommon way to think, but if so, I believe it to be a misconception—perhaps an unfortunate one—for various reasons.
We all know that, behind us, our ancestors proliferate exponentially. We humans are famously bad at appreciating exponential growth. The fable about the inventor of chess comes to mind. When asked by the Emperor what they wanted as a reward, they asked for a grain of rice on the first square of the board, two on the second, four on the third and so on. ‘Is that all?’ asked the Emperor—not realising that he was being asked for enough rice, so the tale goes, to cover the whole of India up to the waists of its inhabitants.
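The fable’s arithmetic is easy to run: a single grain doubled across the sixty-four squares of a chessboard.

```python
# The chessboard-rice fable as arithmetic: one grain on the first
# square, doubling on each of the 64 squares.
grains = sum(2**square for square in range(64))

print(grains)               # 18446744073709551615
print(grains == 2**64 - 1)  # the doubling series collapses to 2**64 - 1
```

Over eighteen quintillion grains—the kind of total our line-walking intuition simply does not anticipate.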
We each experience our family as if we were standing in a spotlight. Our parents, our siblings and our children are fully in the light with us. Also with us are our grandparents, our grandchildren and the children of our siblings. If we are lucky, great grandparents, great grandchildren, the grandchildren of our siblings, stand, half-lit, at the edge of the circle of light. We may even be able to just make out our great-great grandparents as shadowy figures beyond the circle of our experience and memory.
That I might assume that I stand on any sort of line of descent might be because of the way generations follow each other in a somewhat regular, stepwise fashion. But there is no line—only six generations back, we each have sixty-four ancestors, each contributing around 1.5% of our genome, which is about the same amount as each non-African carries of Neandertal DNA.
There are other sources for the conceit of a ‘lineage’, not least, ones that are related to inheritance of power, status and wealth—a line of royal descent, for example—but are these attempts to draw a line of descent more than an arbitrary or ideological choosing of one from a nearly infinite number of possible lines?
That I thought of myself as being part of a ‘lineage’ now seems to me not only fanciful, but harmful too: lineages suggest that families are moving along separate, parallel tracks, that some people are fated to good fortune or to bad, that the childless are not contributing to the future.
Isn’t the truth that we each emerge from a mass of ancestors we know next to nothing about, even as our descendants are destined to disappear into a mass of people that we can know nothing about? There are no lineages, just a mass of genes from which we leap, momentarily, like a dolphin out of the sea, only to disappear once more beneath its waves.