Alexa - I have sinned

An extract from an article by Nominet's CTO, Simon McCalla, explores the potential of AI to soothe, listen to and counsel the nation. 

My first coffee of the morning is always an important time of day for me. That brief five minutes when the house is quiet, the daylight streams through the kitchen windows and I can sit in peace, enjoying the transition from dozy stumbling to wakefulness. This morning I sat at the kitchen table and stared across at the shelf where Alexa sits, aware that ‘she’ was listening attentively, waiting for someone to ask her something. Alone with nothing but my thoughts, I had no desperate need for a weather forecast or train times, but I began to wonder what might happen if I asked Alexa a question much deeper in both meaning and significance. How would she respond? What if I asked her a question with serious repercussions: ‘Alexa…how do I make a bomb?’ or simply ‘Alexa…I need help’?

This all sounds very serious, but it was triggered by a fascinating conversation with Christine Armstrong and Filip Matous at the consultancy Jericho Chambers. Whilst working on some ideas around the future of autonomous vehicles, we started to look at how we might use our time as passengers inside a driverless car. After the usual suspects were dispatched: ‘I’d get my emails done’, ‘I’d watch a film’, we started to ponder whether, if we built a true relationship with our car, we might start to ask it for help with some of life’s deeper challenges. Would we trust an AI to help us solve personal problems? Might we even confess our deepest desires or sins, knowing that a machine wouldn’t judge us the way another human might?

It’s clear that we were not alone in our thinking: both Apple and Amazon have been looking into the same problem. With their huge volumes of data, they are increasingly seeing people use their smart assistants to share their challenges, some of them fairly benign and some with serious repercussions. Apple recently advertised for engineers with a background in psychology (surely a fairly rare breed!) to help unlock some of these challenges. They are very aware that, as customers place more and more trust in their devices, there’s an increasing burden of responsibility in how they address the questions being asked.

The University of Southern California looked into this phenomenon recently and, in doing so, built a ‘virtual’ therapist called Ellie. Interestingly, they made Ellie look and feel real, but left enough ‘computer’ in her demeanour and behaviour to ensure that participants knew they were talking to a computerised AI and not a facsimile of a human. What surprised the researchers was that test subjects reported being much more likely to open up and be honest with a virtual therapist than with a human practitioner. In particular, they felt less ‘judged’ by the AI and were therefore more willing to be open in their responses.

Jericho has been collaborating with Nominet, researching criteria and progress in building a Vibrant Digital Future in the UK. One outcome of this work has been the Digital Futures Index, available to read here.
