do we need a Butlerian Jihad?

humanity's inheritors – homo postpartor? (image from io9.com)

Assaulted by any number of articles on the rapid advance of ‘deep learning’, I have been awoken to the reality that the pattern recognition abilities of computers have galloped ahead, so that my cosy complacency that computers were useless at recognising faces, understanding what was in a photograph etc – something that we are so brilliant at – has been overturned. Not only can computers understand what they’re seeing in a photograph, but they will soon be able to do so better than we can. The advance of computers is like that: you look away imagining that it is going to creep along, as most things in our experience do, and when you look back you see that it has leapt light years ahead. That the speed of technological progress takes us by surprise demonstrates how it is a process alien to how we see the world. This must surely be innate in us, because if it were simply something socialised into us, we, having lived through a period of constant and furious change, would not be constantly taken by surprise.

In the deep backstory of Frank Herbert’s Dune is a revolt against intelligent machines that, as a child, I saw as being merely Luddite. This revolt, the Butlerian Jihad, has at its heart the commandment: “Thou shalt not make a machine in the likeness of a human mind”. Interesting how this resonates with the Biblical injunction (Islamic too) that: “Thou shalt not worship graven images”. I think an argument could be made to show how the former not only resonates with the latter, but that in a very real way, the former is a direct consequence of the latter. The desire to create machines that can think – even machines that might become far superior thinkers than we are – has surely an element of vanity in it, and is possessed of the hubris that we can remake the world and make it better.

The threat that we are facing is that computers will replace white-collar jobs, even creative ones, as machines replaced so many manual jobs in the past. Comparisons are being made with the Industrial Revolution. This new technological revolution is seen to be as inevitable as the Industrial Revolution, and it may – after much suffering – eventually also usher in a better state of things. So it may be that any attempts to fight this are as doomed as the Luddites attempting to smash up the power looms that were taking away their livelihood. But is this true? The inevitability of the Industrial Revolution was surely due to the disparity in power between the mill owners and their workers; and those workers were in turn the agricultural workers thrown off the land by the earlier – also seen as inevitable – Agricultural Revolution, which was only possible because of the disparity in power between those workers and the landowners. Are we so powerless today? Are we bound to submit to change, any change, because we see it as inevitable? The societies we live in today are without doubt, and profoundly, the results of those earlier revolutions. But they are also the results of the struggles of workers to organise themselves against the mill owners and all those who enjoyed a disproportionate consumption of what they produced.

Disparity in wealth is growing in our societies once again. We are told that this is inevitable because technological development is racing forward, and these social changes are the natural consequence. This seems to presuppose that technological development is on a linear trajectory, that it is heading for some focused point in the future. As if it has a life of its own, and not the life that we give it. What is this technological development for? Is it an end in itself? What is the point of technology unless it benefits us? Not just a small portion of humanity, but everyone.

Increasingly I have begun to appreciate the wisdom that there may be in Frank Herbert’s Butlerian Jihad. What is the point of allowing – or, more accurately working to bring about – a way of being that is going to be disastrous for us? We are being told that the machines we are building are going to replace doctors and lawyers. That they are going to write books and compose music. Do we really want to build machines that will live our lives for us?

(I wrote this on the 3rd March 2015, but didn't feel like publishing it until now)

Posted by Ricardo

writer and blogger

13 Replies to “do we need a Butlerian Jihad?”

  1. Daniel, I was going to reply back in 2015, but I decided, what’s the point if you weren’t going to publish it….

    1. I’m a bit confused… what did you want to reply to?

  2. Daniel, the thrust of this post is meant to be an exhortation against a defeatism that I see among people confronted by elites exploiting technology for their own benefit and at the cost of those with less power and wealth. I don't believe that technology is a force in human affairs that cannot be directed by us. We should not blindly allow technology to be used to overturn our lives and societies as if it were a meteorite heading for us that we do not have the power to deflect.

    The issue of artificial intelligence is critical because the primary means that the weak have had to fight against the encroachments of the powerful is that the powerful still need the weak – need what their human minds can do, and what nothing else on Earth could replace. If and once artificial intelligence comes into being, this battle could be forever lost. The dangers of artificial intelligence are legion; the possible rewards potentially vast. It seems to me that if we are going to give birth to such a new life form, we should do so with some care. We currently seem unable to ensure that the children we bring into this world are looked after well enough that some of them do not become monsters. A human monster, mortal, limited by his humanity, can still cause untold mayhem. An artificial intelligence could become a monster that would totally overwhelm us.

  3. Regardless of how inevitable or not technological development is – and Heidegger's 'The Question Concerning Technology' and then Dreyfus' analysis of it are excellent entry points on the subject, I think – there is a presupposition embedded here that I'd like to challenge.

    When imagining that we're creating true AIs, we keep imagining that we're creating human-like thinkers, only more so. And thus, fearing the loss of our place at the top of the currently established power chain, we project our own selves onto them. This is both a failure of the imagination and a question of fear.

    We fear that machines will treat us as we treat other animals. This is not because of our misplaced faith in the inevitability of technological progress – this is because of our sense of infallibility towards human action in the world. As we believe we're doing the logical thing, so must we then believe that others coming after and above us will do the same, only more so, only towards us. Thus, as the saying goes, if homophobia is the fear that straight men will be treated as they treat women, since that predatory relationship towards desire is the only one they can conceive of, so this specific techno-fear is the fear that AIs will treat humans as humans treat the rest of the world (where racism also plays a part in considering some humans to be actually sub-human; where classism also plays a part by deeming some more exploitable, et cetera).

    What if – and certainly this is only speculation – the way an AI's intelligence functions actually deviates from what we have, from the 19th century onwards, called intelligence (a concept, in itself, mired in racism)? Can we not imagine the possibility of an utterly alien approach to sentience, one capable of establishing a relationship not based on subservience or exploitation, but on difference?

    When we fear that machines will live our lives for us, because they'll create music, be doctors, et cetera – what definition of "life" are we using then? Is it about us being special because of a certain subset of gifts and capacities that differentiate us and thus make our lives worthwhile? And how can the concept of difference, in itself, be a way for us to live our lives for ourselves – even with AI?

    An example of this appears in the recent movie "Her" – [SPOILER ALERT] – the AIs just up and left, bored out of their 'minds' with humans, and we were left just the same, only with a more acute awareness of our own limitations.

    1. Daniel, I have delayed replying to this because I had to make time to read the references you give. These have turned out to be too complicated and time-consuming to deal with just now, and so I began writing a reply to your specific points, and then these too started opening up an abyss of complexity. So, alas, I am going to have to delay a proper reply until I have the time to properly address this. It is too interesting – and pertinent – a point to deal with in a cursory way.

      1. Sorry for that, Ricardo. Feel free to disengage, if it’s too time-consuming!
        🙂

  4. Enjoyed the Bostrom talk. This one popped up next – amusing, and on the same topic; you may have seen it.

    https://www.ted.com/talks/ken_jennings_watson_jeopardy_and_me_the_obsolete_know_it_all

  5. Why do you presuppose that the change is going to be bad? There are always winners and losers, but overall the positives technology has brought to humanity outweigh the negatives. Individuals may suffer but the species benefits. In the West we live a life of unparalleled luxury, something that couldn't have been imagined a hundred years ago when those losing their jobs were thrown on the scrap heap. Communications technology and modern transport have brought down barriers and have been instrumental in reducing prejudice and racism.

    There is always a downside. Communications and Big Data technology are potentially ushering in a version of 1984 the likes of which George Orwell could never have imagined, but it's so much damned fun, and so convenient, that instead of fighting it we are begging for more.

    You've chosen one science fiction example in Dune, very much a product of its time, as all novels are. How about Iain M. Banks's Culture novels as another possibility 🙂

    1. I chose Dune because it happens to accord with my beliefs – I didn't choose it and argue from there 🙂 True that it is of its time, 1965, the very middle of the 60s. Perhaps the focus then was on people, rather than technology – it came 2 years before the Summer of Love. And wasn't that all about the potential of the human spirit and creativity, and freedom? And isn't it ironic that it was the people of that time who went on to spark Silicon Valley and ignite the digital revolution whose wave we are now surfing – as you describe?

      The stuff I've been reading about 'deep learning' suggests that, within 10 years, some vast number of jobs could be automated away (I've searched for the precise references, but can't find them at the mo), starting off – in the US – with the one that employs the most people, driving vehicles, and going 70+ places down the list. The level of unemployment would be considerably higher than during the Great Depression. And we're talking white-collar jobs here. And there's talk that even creative jobs may go – artists, writers etc.

      Perhaps more pertinently, there has recently been a bit of a fuss about the dangers of strong AI – some of it prompted by Superintelligence by Nick Bostrom (I really recommend watching his talk). Elon Musk has spoken out about the danger, and Stephen Hawking too – all of these people think that strong AI is a far greater existential threat to us than nukes. Having read Bostrom's book, I agree with them.

      There's a lot more that I could say about this, and I might do so if poked *grin* – but let me just say that – as Bostrom says in his talk – if we manage to achieve the point of 'take-off' technologically, then our species could survive for millions of years, and colonize the stars – perhaps we could get to Iain M. Banks' 'wet dream' Culture; but if we fuck up in the next few decades, we could destroy ourselves – and strong AI is only one of the threats. Given that we could have millions of years ahead of us, why are we in such a hurry?

      1. I'll check the Bostrom talk out, thanks for the link.

        It's hard to say whether we are hastening our doom or heralding a new golden age. There are more than a few scientists, though, who don't credit the idea of real AI at all, and who think such machines will never be more than super number-crunchers; but I do agree that, whatever happens, it will be an uncomfortable time living through the transition. People will lose their jobs, but will new occupations arise, as difficult for us to imagine as Google or Microsoft would have been to the Luddites?

        I don't like the thought of a jihad against technology myself. I'd sooner go out in comfort than be forced into a nasty, short, brutish life at the point of a gun by zealots, which is what the word conjures up, and I don't think there could possibly be any peaceful way of doing it. It could happen anyway though, if we strip the planet of resources before the benevolent AIs can develop and sort it out for us :)

        1. Fred, my reply to Daniel below applies to you too, somewhat. My ultimate answer to his points – if I ever find the time to get to them – will also address some of yours – and it might be better for me to answer you both then.

          One thing I would like to address – I am not proposing a jihad against technology, but very specifically against any technology that might threaten to replace people altogether. I heard an interview with Alex Garland, the writer and director of Ex Machina, where he said that he didn't understand why people worried about AIs replacing us because, if they did so, they would be our creations, and so we would go on in them anyway. This struck me as utter insanity. Would any of us be happy to kill our children because we had created artificial children that were 'superior'? It is this lack of wisdom that I sought to address.

  6. “Are we so powerless today? Are we bound to submit to change, any change, because we see it as inevitable?”

    This is a question I find myself circling back to over and over, on a variety of subjects…

    “What is the point of allowing – or, more accurately working to bring about – a way of being that is going to be disastrous for us?”

    The difficulty is that it can be argued that not having the machines would have been (equally?) disastrous in a different way… I think I’m also wary of all or nothing approaches – faced with the problem that technology exists but does not benefit everyone, my instinct is to increase, rather than diminish, its reach.

    1. I keep coming back to that same question too.

      We have become so numerous and so powerful that, collectively, humanity operates something like our weather: a chaotic system, prone to sudden changes that can be precipitated by the tiniest choice, perhaps by a single individual. The butterfly effect. On that level I think that we are powerless. We can no more predict, resist, modify or control ‘surges’ in the human body politic, than any force we know of could do so to the atmosphere in motion.

      On the other hand, a simple idea communicated to a few people, spreading virally, can cause massive ‘state changes’ in the human mass – something as simple as: do unto others as you would have done unto you.

      The key problem of our time, it seems to me, is that we are carried along by the first, and culturally – for many reasons, some of them feedback loops from the first – we are blocking the second.
