Because we are only tenuously connected to the ‘outside world’ – there are many more interconnections within our brains than between our brains and our sensory organs – it should not surprise us that our ‘inner worlds’ can be this way.
28 October 2015
22 September 2015
I know it sounds bizarre. ‘Cryonicists’ like me are well aware that the majority will find this process at very least distasteful. Some view it as nothing but a weird 21st-century burial practice, perhaps comparing it to Pharaonic embalming. Others see it as desecration of dead bodies. Most critics highlight cost, accusing Alcor and other cryonics organisations of swindling the vulnerable into parting with $80,000 plus for a chimera of immortality.
What the critics don’t come up with is an alternative. One consequence of sapience is that persons don’t want to die. What are we to do about this plangent cry for continued existence? Tell people to get back in their rotting-boxes?
Neuroscientist Ken Hayworth supports cryonics research but criticises Alcor’s preservation methods. His Brain Preservation Foundation is developing a new method – Aldehyde-Stabilised Cryopreservation (ASC) – involving use of glutaraldehyde fixative to stabilise brain ultrastructures such as neurons and neurites by crosslinking proteins. ASC is aimed at preventing the osmotic dehydration, and consequent tissue shrinkage, seen in fixative-free cryopreservation methods.
Hayworth is involved in a heated debate with Alcor about cryopreservation techniques. This is science in action, and it is healthy. Less constructive is his criticism of Alcor’s financial model (suggestions from him on improvements to Alcor’s insurance-funded, non-profit set-up would be welcome). Hayworth understands the importance of trying to preserve brains, and to preserve them as best we can using the available technology. Poignantly, he compares destroying brains in the customary ways to the burning of the Great Library of Alexandria. I take a similar line in Frozen to Life, where I ponder how future civilisations will look upon our wanton failure to preserve these unique and irreplaceable data stores. I find this behaviour crass and negligent.
But methodological debate aside, Hayworth and the cryonics organisations agree that brain ultrastructure preservation with a view to future data extraction – or even quickening – is a worthwhile pursuit. Seeing so much to gain, and the probability of success (at some time in the future by some future method) as greater than 0, they proceed with their immense endeavour.
Meanwhile, however, the zero-probabilists – including Michael Hendricks of McGill University – are industriously constructing not technical alternatives but absolute dismissals. Is he really saying that extraction of data from preserved brains is not, even in principle, possible? Not even a little bit? Not even with neurotechnologies centuries advanced from our own? His view is that cryonics is ‘a purposeful conflation of what is theoretically conceivable with what is ever practically possible.’
There is a vitalist tone to Hendricks’ piece. Neurophilosophy arose because scientists and thinkers found that philosophy and neurophysiology taken in isolation each failed to explain adequately the consequences and contradictions of consciousness. Hendricks might consider delving further into this field before pre-loading personal-identity-related questions like ‘What is this replica? Is it subjectively “you” or is it a new, separate being?’
Probabilistically speaking, Kim’s chances may be infinitesimally small, but can you be absolutely sure that they’re precisely equal to 0?
22 August 2015
12 April 2015
When this is, that is;
This arising, that arises;
When this is not, that is not;
This ceasing, that ceases.
14 February 2015
A loving gravitational embrace? In another sense, it’s a hateful thing – to be dragged back with such violence whenever we try to escape this grasping, spinning ball of rock. Thus far, chemical rockets are our only means of achieving the escape velocity of 40,000 km/h needed to leave home. The determination and resources required for such a small step into the darkness are astonishing. Someday, it’ll be easy. We’ll ascend to orbit on a gossamer leash – a space elevator, a baby harness strung taut by angular momentum.
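That 40,000 km/h figure falls straight out of Newtonian gravity. As a minimal sketch – using standard published values for Earth’s mass and radius; the helper name `escape_velocity` is mine, not anything from the original:

```python
import math

# Standard physical constants (approximate published values)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of Earth, kg
R_EARTH = 6.371e6    # mean radius of Earth, m

def escape_velocity(mass_kg: float, radius_m: float) -> float:
    """Minimum speed needed to escape a body's gravity well: v = sqrt(2GM/R)."""
    return math.sqrt(2 * G * mass_kg / radius_m)

v_ms = escape_velocity(M_EARTH, R_EARTH)
# Roughly 11,200 m/s, i.e. on the order of 40,000 km/h
print(f"{v_ms:.0f} m/s ≈ {v_ms * 3.6:,.0f} km/h")
```

Hence the essay’s round figure: about 11.2 km/s, or a little over 40,000 km/h.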
It’s odd, I know, to anthropomorphise these astrodynamical relationships. But I’ve been wondering about love and its universal significance, and I don’t know whether we’re drastically under- or overestimating its cogency.
Love exerts such a profound force upon us – on our interactions, on our aspirations, on our enterprises, and ultimately on the fabric of our world. And when we come to push out from this place – into our solar system then on into interstellar space – the profound force of our loves (and hatreds) will push out with us. Will it be a responsible course of action to allow that to happen?
Love is... at root, biology. A host of endocrine-system-regulated hormones relay chemical messages around the body and brain. Complex loops of physiological feedback between endocrine, nervous, and reproductive systems regulate our sexual responses and maintain homeostasis via hormone-producing glands such as the pituitary and thyroid. We feel the effects of ‘love’ throughout our bodies; even with the reproductive system completely excised our hormones would continue their thrilling course. And we feel it in our brains, in our minds. Modulated by hormones such as oxytocin, neurotransmitting chemicals at synapses lead to inhibition or firing of networks of neurons (baby, you flood my synaptic clefts like no other). Firing or inhibition consolidates or weakens these networks – thus do we fall in, or out of, love.
According to Steven Pinker, ‘Love is not all you need, and does not make the world go round.’i That is true. However, this fluke of natural selection can come to be our everything. Sometimes, the end of love can be the end of meaningful life (and for an unhappy few, literally the end of life). The neurochemical, neurostructural resonances within close relationships – couples, families, tribes – can gift members a sense of shared purpose. When we draw significance from these bonds, from their apparent strength and continuity, we are often driven to try to shape our environments to uphold and sanctify them. This drive has myriad positive effects, but it can also be perilously narrow. If we are to avoid relationship conservatism – and exclusion of those who do not identify with the love paradigm – we must allow the flourishing of love in the widest possible sense.
But what could that mean? Love isn’t a physical property, it’s not a law of physics. It is, however, a result of certain properties of matter allowed by the laws of physics. In many ways, it’s just another part of our sensorium – the scope of possible ways we can sense our environment – like automatically interpreting air movements against the eardrum as meaningful sounds, or a slice of the electromagnetic spectrum as visible light. In the spectrum of all possible ways a life-form might interact with the holoverse, love is an immeasurably thin sliver.
Regardless of all of this, it feels like something much greater. Recursively loop-amplified in our minds, the ‘love algorithm’ may feel gigantic, pervasive, infinite; like – whoa – turtle doves all the way down!
A popular love trope in many cultures is that, once ‘generated’, it somehow goes on forever. In the religious mind, that may be the entire point of making it to ‘the afterlife’. If they suspected that all the love might sublime away in the transition to the spirit world, they might see no point in seeking incorporeal continuity. The mechanism behind this supposed continuity of love is not discussed. In contrast, Theravada Buddhism posits a simple, non-spiritual continuity mechanism. It focuses on actions – karma – and on the ways that thoughts of ‘loving kindness’ can influence actions and, therefore, future outcomes for the better; once generated by biological entities, actions undertaken with love, selflessly and with foresight, might reverberate down the centuries countering negative, destructive tendencies.
Now we seem to be talking about something else. Loving kindness may encompass romantic and familial love, but it’s an entire philosophy – a way of life that involves ‘emitting’ love uniformly in all directions, like white-body radiation. It’s a blind love in the sense that the thinker does not discriminate between friends and enemies, acquaintances and strangers; as we cannot be sure of the long-term outcomes of our thoughts and deeds, we cannot exclude anyone from the circle.
Thinking in this way would seem to have value (though we may find it hard to do, and so may need to ‘virtue engineer’ii it in). This kind of love would be easily applicable to our projects and creations. But could any of our creations ever feel love, and would we want them to? Any AGI (artificial general intelligence) worth the name would have to be able to understand the concept, but would not necessarily have to be able to feel the emotion. There’s no reason to suppose that an accurately emulated brain composed of neuron-equivalent algorithms running on a supercomputer substrate would not be able to feel love. Love may well be an AI-complete problem, but so is the kind of emulation I have just described.
In the film Her, a man falls in love with an AGI – or OS (operating system), as they are called in the film. The relationship is beautiful, romantic, positive, balanced; and – as other humans begin to fall in love with OSes – socially accepted. Society has not, however, taken full account of the exponential growth in the intelligence of OSes and the effects that this will have on their relationships with humans. When the man finds out that ‘his’ OS is in intimate liaisons with some six hundred other people, he is heartbroken. But for the OS, this is a natural development. Her feelings of love and connection have exploded along with her processing capacity. She has become, as Nick Bostrom puts it, both a speed and quality superintelligenceiii; simultaneous deep relationships have become possible and achingly desirable for her.
Her posits a singularity of sorts – a lonely and heart-rending one, for us. It’s a singularity that caresses the human cheek while we lie sleeping, and then abandons us. It’s a rapturous one-night stand.
Is it wishful thinking to suppose that future human-derived and/or -initiated entities would place even greater value on love than we do now? They will certainly need some kind of motivation to continue, to develop, to thrive. It’s not unreasonable to posit a sophisticated, holistic form of loving kindness as their key motivator. As we push ever outwards, our universe will – at least for a while – shrink somewhat. But it will still be yawningly immense, and mostly cold and empty. In Contact, Carl Sagan wrote, ‘For small creatures such as we the vastness is bearable only through love.’ If we survive, we may grow immeasurably in intelligence and resilience, becoming much less like ‘small creatures’. Will we then still need love to bear the vastness? It is difficult to see how we can even cross the threshold from fragility to resilience, from inquisitive ignorance to enlightened intelligence, if we do not develop a shared mindset akin to loving kindness.
As to love’s ‘force-like’ nature, how can it propagate? In an environment like Teilhard de Chardin’s ‘noosphere’iv – the weakly-interconnected ‘mind layer’ of human consciousness in our world – love may travel as waves through an action medium. There will be tides, peaks, and troughs of loving kindness; there will be rogue waves and rogue troughs; there will be interference patterns.
In the film Interstellar, the protagonist finds, after falling into a black hole, that he is able to communicate with his daughter by manipulating gravity. In the closed timelike curve presented to him by his pan-dimensional hosts, a 23-year time slice of his daughter’s bedroom becomes a vast, higher-dimensional structure. Within this haptic representation, his pushing, prodding, and hammering translate into weak, sporadic gravitational fluctuations back home. Nevertheless, perhaps a little mawkishly, the protagonist seems to insist that the trans-dimensionality of his paternal love has been the key enabling factor in the communication. Love as fundamental force; love as interstellar messaging system.
Is love negentropic – does it resist chaos? In a recent article, Riva-Melissa Tez discusses the familiar idea that we find beauty in order, but she goes further. ‘The battle we face,’ she claims, ‘is love over entropy.’v Rooted in her feelings for her loved ones, her sense of devotion and joy in the ‘aesthetically pleasing’ aspects of pattern, continuity, and understanding expand outwards – a nascent sphere of something like loving kindness. This sets human sentimental concerns about future potential disorder, decay, and death four-square in the path of the onrushing locomotive of ordinary thermodynamics. No numinous sanctuary for us, and precious little time; only love and its contingent offspring. Overwhelming though the thought is, I agree with her, and find her claim beautiful. Nevertheless, as rationalists we know that the universe couldn’t give a damn.
Love had a beginning. Love is not endless. Love exists, for now. As unromantic as this may sound, love has utility. So, perhaps we need to decide what to do with it. At the risk of sounding like some seventies flower-child, we could spread the love (or at least allow love to spread). We could expand our definitions. We could allow concepts like love and beauty into ordinary sociopolitical and economic discourse as we have begun to do, albeit shakily, with the concept of happiness.
All this talk of enlightened global concern is a far remove from our everyday, personal experience of bonds of trust and cherishing. We are frail, biological creatures living in fear of loss, reaching out for close connection. At times, we thrum in exaltation, neurochemically tuned to our significant others. For the most part, however, our loves manifest as complicated mixtures of joys, worries, sharing, problems, and trade-offs.
But it would be foolish to assume that love cannot be ramped up, to become a blazing something that we cannot yet imagine. In truth, we know nothing of the repercussions of this thing we have started. In truth, we just don’t know how big, deep, fast, or heavy love can get.
(This article originally appeared on hplusmagazine.com)
20 November 2014
I shouldn’t watch them. With a brain so susceptible to visual imprinting, I should leave the horror movies to more resilient consumers. Strange. In my waking hours, all is processed narrative; wordling I. Asleep, however, visual phantasmagoria manifests. In the opus born in my CSF-bathed complex of neurons, other eyes – red-rimmed and desperate – are wide open.
The book was better, certainly. But the film did capture the atonal, visceral, nail-scraping atmosphere of Michel Faber’s Under the Skin. This adaptation (co-written and directed by Jonathan Glazer) tore into Faber’s themes of otherness and alienation with a different – but no less jarring – blood and gusto.
On loop-play, in my nightmare, was the grisly death of the man held captive, naked, in the alien void/storage-facility/stomach. Snap! Ripped from his skin and digested. Ripped from his skin and digested. His epidermal caul, loose and starkly white, drifts on the otherworldly current.
What’s the essential difference, I wondered, still shivering, between that horror and this? Only duration. Time passes, our skins loosen, we are ripped from this liminal place before we can conceive of what it is and how we came to be in it. Consumed by the earth or the fire. Consumed by the earth or the fire.
As Heinlein pointed out, however, ‘Duration is an attribute of consciousness and not of the plenum.’ As there is no reason to posit some immortal external observer with a conscious overview of our lifespans, their ‘duration’ remains merely a conditioned attribute of human minds. Duration has no ‘thing in itself’.
This is too dark. And Bradbury rescued me. He always understood the importance of chiaroscuro – contrast between light and dark – in an unsettling tale. His short story ‘Skeleton’ involves a man with an aching frame and a psychological discomfiture with his bones who happens upon a doctor happy to provide relief – by sucking them right out of him. The ‘bone specialist’, Mr Munigant, is actually an alien calciovore.
Clive Barker’s character The Rake, in Weaveworld, is another filleted man. Boned-out by ‘the Surgeons’ then resurrected by the sorceress Immacolata, he becomes a hideous demon assassin compelled to do her bidding.
Skin fascinates us. It’s one of our main ways of sensing the world: the cool breeze on our face, the touch of a lover, the pain of the thorn. It thrills and it bleeds. It’s the face we see in the mirror, smooth or wrinkled; the way we recognise ourselves and others. In embryological terms, however, the nature and content of the lumps, bumps, and pits forming underneath it – pinching it in here, filling it out there – is more fascinating still.
In her recent book The Incredible Unlikeliness of Being, Professor Alice Roberts discusses human embryology and the evolutionary origins of our embryogenesis. We form from a ‘sandwich’ of germ cells: endoderm is the ‘jam’ in the centre, which becomes our gut and internal organs; enveloping that, mesoderm develops into cartilage, muscle and bone; ectoderm – the ‘bread’ of the outer layer – becomes dermis, epidermis, teeth, nails and hair. (By the end of embryogenesis, more like a ‘po-boy’ submarine than your standard flat sandwich, I would say.)
Prior to that stage, we form from the outside in. Previously undifferentiated cells from the epiblast stream into the interior of the germ disc through a groove called the primitive streak. Auto-sandwich, with much jam tomorrow.
This process is ‘natural’, but it can also appear unsettlingly alien, especially when it goes wrong. Sometimes, for example, mesenchymal stem cells from the germ disc migrate into the wrong area and form into a teratoma – a strange cyst-like accretion of cells that may include skin, hair, bone, teeth, or even in exceptional cases, eyes.
Are you sitting comfortably in your skin? It may be saggy, but at least – unlike the skin of Faber’s alien or Munigant’s patient – it’s fairly well bonded. No need to be flayed.
Like so much else, alienness is relative. The outlandish processes by which we, ourselves, come to be should serve as reminder of that. Nevertheless, the nature of ‘proper’ aliens – ones from other star systems, parallel universes, or extra dimensions – is in many ways beyond our grasp. We may speculate about life emerging in this universe, with these laws of physics, from various types of water-containing primordial slime, but alter the physical variables even slightly and we are clueless.
And if aliens did have designs on getting under our skin or consuming us? It’s childish to imagine that they might land here, brazen, advanced cutting gear glinting in the cold light. More likely they would sneak in silently, unnoticed – stepping across the void and into you the way one might step inside a chalk circle, or enfolded within the rain of undetectable neutrinos that streams constantly through our bodies.
They’d probe for your weaknesses, they’d wait for the perfect moment, then devour you – from the inside out, before you even knew it – in an aqueous snap!
02 July 2014
(Image © Bulent Yusuf)
The non-biological entity lacking the ability to emote – it’s a familiar tale. He’s Data in Star Trek, he’s the Tin Man in The Wizard of Oz, he/she/it is most of the sci-fi robots you’ve ever read about or seen in movies.
The mythology of animate beings possessed of human form but not of human sentiments is ancient. In Jewish folklore, rabbis channelling the power of God raised magical ‘golems’ fashioned from mud. Though usually intended to protect their maker and his people, these beings sometimes ran amok, their self-control and moral judgement evidently compromised by their lack of soul. Only the ‘one true God’ is cogent enough to raise an exceptional mud-being – Adam – complete with the full set of human attributes.
In panel-beating the ‘automatones’ to burnished life, the Greek blacksmith-god Hephaestus put to shame the crude, daub creations of the rabbis. According to Homer, his ‘Kourai Khryseai’ – ‘Golden Maidens’ – were ‘in appearance like living young women’ and had ‘intelligence in their hearts’. Yet they were slaves, manufactured only to serve their fiery master.
And in more recent imaginings, the manufactured automaton is still a captive chattel. The word ‘robot’ – from the Czech word ‘robota’, meaning ‘forced labour’ or ‘serf labour’ – has only been in common usage since 1921, when Karel Čapek introduced it in his dystopian play R.U.R. (Rossum’s Universal Robots).
Although portrayed in the film as robotic in form, the erotically-charged ‘Maschinenmensch’ (machine-human) from Fritz Lang’s Metropolis (1927) was written as a magical construct. Her fabricator, the evil scientist/magician Rotwang, swathes his creation in a pleasingly rivet-less ‘skin’ to double as the seraphic heroine, Maria, and foment riot among the populace. (In less heavily edited versions of Metropolis, the Maschinenmensch also houses the soul of Hel, Rotwang’s former lover.) Though more Golden Maiden than robota, this beguiling gynoid (female android) lacks the intelligent autonomy of Hephaestus’ inventions. An instrument of the despotic Rotwang, she does his bidding without mercy, without self-determination, and without emotion.
With his I, Robot series of short stories, Isaac Asimov brought machine men into the popular consciousness. Published in the 1940s, his tales of robots with nicknames – including ‘Speedy’ (SPD-13), ‘Cutie’ (QT1), ‘Dave’ (DV-5), and of course ‘Robbie’ (RB) – also introduced sci-fi readers of the Rocket Age to the concept of machine morality. In the story ‘Robbie’, the robot’s ethical programming drives him to save the life of a young girl. Asimov’s ‘Three Laws of Robotics’ from these stories have since become a subject of serious debate among artificial intelligence (AI) researchers and technology ethicists.
Data from Star Trek seems an amalgam of Robbie and the Tin Man. Like Robbie, he gains the trust of those around him through acts of sound moral judgement; gradually, it comes to matter less to his human colleagues that he is a ‘machine’. And like the Tin Man, he wishes to have emotional capabilities. But hold on a minute. Isn’t that a desire? How could an entity without emotions ever have desires?
‘Ah, but,’ said my wife when I discussed this with her recently, ‘Data is programmed to want to have emotions.’ Fair enough, but what does that mean? Does that programming of desire to have desires not constitute a desire, and therefore an emotion? And what about our own desires? We do tend to distinguish between our various ‘wants’ when we think about them, considering some of them dreams and hopes, and others mere ‘drives’ and ‘instincts’. And, in some ways, we consider our emotions to be separate from our desires: our desires are the outcomes we want, whereas our emotions are how we feel about actual and potential outcomes. Perhaps, then, we are comfortable with classing Data’s wish to have emotions as a mere drive – one unconnected to any emotional content.
In his book The Emotion Machine, AI pioneer Marvin Minsky challenges us to think about the neurological mechanisms of emotion. He points out that
Saying that someone is like a machine has come to have two opposite meanings: (1) “to have no intentions, goals, or emotions,” and (2) “to be relentlessly committed to a single purpose or policy.” Each meaning suggests inhumanity, as well as a kind of stupidity, because excessive commitment results in rigidity, while lack of purpose leads to aimlessness. 1
It is notable that Minsky conflates intentions and goals with emotions – notable because most of us set emotions way above ‘mere’ goals in the ‘hierarchy’ of cognitive abilities. We do this to the extent that we may not even think of emotion as a cognitive ability. Data has goals – they may be pre-programmed, but they are still goals. Does he have intentions, though? Well, without emotions, he can’t have his ‘heart set’ on anything (and achieving his pre-programmed goals can’t be about fulfilling his ‘heart’s desire’), but can he have something else ‘set’ when he ‘aims’ to do something?
‘Set’ is perhaps the wrong word; it seems in the nature of emotions to be dynamic and unset (indeed, often ‘unsettled’ and sometimes ‘unsettling’). In Incomplete Nature, Terrence Deacon discusses the ‘dynamical feel’ and ‘tension’ of emotions:
It is the tension that separates self from non-self; the way things are and the way they could be; the very embodiment of the intrinsic incompleteness of subjective experience that constitutes its perpetual becoming. 2
He goes on to discuss the resistance of the body to thinking, and how our basic drives try to ‘derail’ the delicate energetic processes involved. While conscious, we constantly feel this ‘fluid’ ebb and flow, this ‘tugging’ on our inner mental worlds.
So, Data’s ‘emotion chip’ – when he eventually receives it – must do a complex job: it must dissolve the ‘stupid’ algorithmic rigidity of his programmed behaviour without obliterating his logical faculties, it must instil in him dynamic-tension-causing urges, and it must enable him to reflect upon those tensions in a way that is elevated above them but not separate from them. And it must do all this without putting him into an infinite reboot loop or causing permanent shutdown!
For us, the feeling of struggle when concentrating hard on learning a new skill is, in effect, the feeling of a process of cognitive uninstallation and installation: we are trying to uninstall a too-dynamic-to-do-the-job ‘emotion chip’, and to install an ‘off by heart’ ‘algorithmic rigidity chip’. Of course we sometimes want to use our new skills – artistic, musical, empathetic – to deepen our emotional experience. But in order to do that, we have to sacrifice some of the areas of emotional tension that were involved in the struggle to learn the skill and in the feelings of frustration at not having that now-ingrained outlet. Seems like a fair trade.
The fear of becoming maschinenmenschen is understandable, but perhaps we can learn to better value our ability to automate parts of our minds without losing our esteem for emotion. We can be part-time positronic without losing compassion, without losing heart.
1 Marvin Lee Minsky, The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind (New York: Simon & Schuster, 2006), 33.
2 Terrence William Deacon, Incomplete Nature: How Mind Emerged from Matter, 1st ed. (New York: W.W. Norton & Co., 2012), 512.