The Ethics of Disruption: Rights and wrongs with Artificial Intelligence

Computer says 'no.' And sometimes it says 'yes.' But what are the risks of allowing AI to make important decisions on our behalf? The first in our series of Ethics of Disruption podcasts looks at how going back to the basic rules of philosophy might help untangle modern Gordian knots.

***

It’s now 52 years since the release of Stanley Kubrick’s film 2001: A Space Odyssey and its star, HAL 9000, the artificial intelligence with the creepily gentle American accent. In 2020, HALs are not just fanciful Hollywood dystopian stars but real and in use, day in, day out.

The development and application of AI is causing huge divisions both inside and outside tech companies. Google isn’t alone in struggling to find an ethical compass. Whole branches of new academic research are springing up around AI and ethics. Some of what these companies do is neither transparent nor comprehensible, even to the interested layperson. For instance, Google has said it will not sell facial recognition services to governments, while Amazon and Microsoft both do. These companies have also been attacked for the algorithmic bias of their programmes, where computers propagate bias through unfair or corrupt data inputs.

In response to criticism not only from campaigners and academics but also from their own staff, companies have begun to self-regulate by setting up their own “AI ethics” initiatives. These perform roles ranging from academic research – as in the case of Google-owned DeepMind’s Ethics and Society division – to formulating guidelines and convening external oversight panels. The result is a fragmented landscape of initiatives that both supporters and critics agree have not yet had demonstrable outcomes beyond igniting a debate around AI and its social implications.

In the meantime and in the real world, AI is in action in this time of COVID-19, and its use goes to the heart of the debate between the public good and individual privacy. In South Korea, the government has successfully relied on mobile carrier data to track everything from patients who should be isolated to how well people are following limited-movement edicts. South Korea’s health authorities have even been sending detailed text messages, ranging from reminders about handwashing to specific information about people who have tested positive and where they have been. The Guardian newspaper in the UK has reported that one example read, “A woman in her 60s has just tested positive. Click on the link for the places she visited before she was hospitalized.” The link directs to a list of locations the person visited before she tested positive.

Might our curiosity – combined with boundless ambition – be the undoing of us? A number of scientists and engineers fear that, once we build an artificial intelligence smarter than we are – a form of AI known as artificial general intelligence – Doomsday may follow. Bill Gates and Tim Berners-Lee, the inventor of the World Wide Web, recognise the promise of an artificial general intelligence (A.G.I.) – a wish-granting genie, rubbed up from our digital dreams – yet each has voiced concerns. Even Elon Musk warns against “summoning the demon,” envisaging “an immortal dictator from which we can never escape.” Stephen Hawking declared that an A.G.I. “could spell the end of the human race.”

***

This article previews the forthcoming discussion series The Ethics of Disruption, curated by Jericho Chambers on behalf of the investment bank Stifel and its Europe CEO, Eithne O’Leary.

Listen to the first Ethics of Disruption podcast, produced on behalf of Stifel:

This podcast in the series looks into the theory and practice of the Ethics of AI and speaks to three experts. First, Professor Dominic Houlder of London Business School. Although a professor of strategy in his day job, Houlder has recently published a book called “What Philosophy Can Teach You About Being a Better Leader.” Second up is Nikolas Kairinos, founder of Fountech.ai, whose companies produce AI. Third, Professor Joanna Bryson, Professor of Ethics and Technology at the Hertie School in Berlin, whose research focuses on the impact of technology on human cooperation and on AI/ICT governance. The presenter is Matthew Gwyther.

Join the Conversation

Jericho will be curating a number of salons and discussion forums on the Ethics of Disruption and the future of technology and tech companies, starting with the impact of AI. To get involved, please get in touch with Jericho’s Programme Director, Becky Holloway.

Matthew Gwyther

Matthew edited Management Today for 17 years and during that time won the coveted BSME Business Magazine Editor of the Year award on a record five occasions. During a fifteen-year career as a freelancer he wrote for the Sunday Times magazine, The Independent, The Telegraph, The Observer and GQ, and was a contributing editor to Business magazine. He was PPA Business Feature Writer of the Year in 2001. He has also worked on two drama serials, one for Channel 4 and one for the BBC. Before becoming a journalist he had a brief and inauspicious spell as a civil servant, working at the Medical Research Council in its London Secretariat.

Matthew is the main presenter on BBC Radio 4’s In Business programme.

Matthew is also the co-author of Exposure, published by Penguin in London and New York in the autumn of 2012. It is the story of whistleblower Michael Woodford, the “Southend samurai” who left school at 16 and worked his way up to the top post at the Japanese industrial conglomerate Olympus, only to discover that his board were involved in a two-billion-dollar fraud.

Contact: matthew.gwyther@jerichochambers.com

https://www.linkedin.com/in/matthew-gwyther-8b043210/