Technology, Legitimacy & Social Justice: Fighting “the propaganda of inevitability”

The tech sector is brimful of a dangerous self-confidence that it can fix everything. But many social ills can rightfully be laid at its door, and corporate experiments with Artificial Intelligence are now worrying many. What happened to our trust in tech, and how can it be regained, asks Margaret Heffernan?

A 7-minute read.

Some years ago, I joined a conference hosted by one of Silicon Valley’s top venture capitalists. It was the kind of event I knew well, where all the CEOs funded by the VC gather to compare notes on what they’re seeing in the marketplace and are provided with (or subjected to) some pearls of wisdom from those with more experience.

Once upon a time, I was one of those CEOs; now I was supposed to dispense wisdom about wilful blindness. I don’t remember much about the conference beyond its opening, which went something along these lines:

There is nothing wrong with medicine that getting rid of doctors won’t fix.

There is nothing wrong with education that getting rid of teachers won’t fix.

There’s nothing wrong with the legal system that getting rid of lawyers won’t fix.

…our host stopped just short of saying there was nothing wrong with democracy that getting rid of voters won’t fix. I sat up and paid attention. This was not the tech sector as I’d known it. All those years back, in what I’ve come to think of as Internet 1.0, we had been full of curiosity, wondering what new wonders we could invent with this new tech and how we might develop it to enable, empower and even liberate everyone.

"We were idealists, we were naïve"

We hadn’t gone into it to make money – there wasn’t any in the beginning. And we hadn’t gone into it for power – there wasn’t much of that either. We were idealists, we were naïve.

How things had changed. Now the full force of Big Tech was marshalling its overwhelming power against … teachers? Doctors? This was definitely different, a calculated attack on society as we know it.

Since the early days, tech has become so much bigger, richer and more aggressive. It tricks kids into spending money, and newlyweds and parents into buying stuff they didn’t know they were buying; it recruits subjects for experiments no one knew they were part of; it siphons off data. Who knows what else can be laid at its door?

This is not the way to build trust. It is a way to build distrust. And nowhere is that deeper than in discussions about artificial intelligence.

But first, what do I mean by trust? According to Veronica Hope-Hailey’s rich research, it is a mix of four ingredients:

  • Benevolence: you wish the best for me, for all of us

  • Competency: you know what you’re doing

  • Consistency: you don’t change your mind or lie

  • Integrity: you act the same way when nobody’s looking

But even at this early stage, these four qualities are notably absent from corporate experiments with AI. Instead, there is a whole raft of problems that, thanks to diligent investigators, keeps being brought to light.

No consent

Earlier this year, it was revealed that Pearson, a major AI-education vendor, had inserted “social-psychological interventions” into one of its commercial learning software programs to test how 9,000 students would respond. It did this without the consent or knowledge of students, parents or teachers. This doesn’t appear consistent with Pearson’s values, nor does it exude integrity.

Bias

Ninety-six percent of the world’s code has been written by men. And there’s an army of scientists (if you need them) who have demonstrated that WHAT is made or written is a reflection of those who make it. (In art, this is deemed a virtue.) So we already start in a deeply troubled, profoundly unrepresentative place.

Using AI for hiring selection has been shown – by Amazon – to be ineradicably biased: trained on historical data of overwhelmingly male employees, it selects… overwhelmingly male employees, and deselects women. After two years of trying to fix this, even Amazon gave up. But not all companies have. So when AI meets diversity policy, what you get is… at the very least, NOT consistent.
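
To make that mechanism concrete, here is a minimal, purely illustrative sketch – synthetic data and a generic off-the-shelf classifier, nothing to do with Amazon’s actual system – of how a screening model trained on skewed historical hiring records learns to reproduce the skew:

```python
# Toy illustration only: synthetic data, generic classifier.
# The point is the mechanism, not any real company's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic "historical" applicants: a skill score and a gender flag (1 = woman).
skill = rng.normal(size=n)
is_woman = rng.binomial(1, 0.2, size=n)   # the old workforce was ~80% male

# Historical hiring decisions favoured men regardless of skill.
noise = rng.normal(scale=0.5, size=n)
hired = (skill + 1.0 * (1 - is_woman) + noise) > 0.8

# Train a screening model on those historical outcomes.
X = np.column_stack([skill, is_woman])
model = LogisticRegression().fit(X, hired)

# The learned coefficient on is_woman comes out strongly negative:
# the model deselects women, because that is what the data rewarded.
print(dict(zip(["skill", "is_woman"], model.coef_[0])))
```

No deliberate malice is required: the model simply optimizes for agreement with decisions that were already biased.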

Inadequate data sets

Deducing mood, psychological state, sexual orientation, intelligence or the likelihood of paedophilia or terrorism by “analyzing” facial expression keeps being shown to be inaccurate, outdated, biased and built on flawed datasets. This is not benevolence.

Training AI to spot a “criminal face” using data consisting of prisoners’ faces is not representative of crime; rather, it represents which criminals have been caught, jailed and photographed. A high percentage of criminals are never caught or punished, so the dataset cannot adequately represent crime itself. We are back in the land of phrenology… or worse.

We all know there are some scientific research programs best not pursued – and this might be one of them.

Illegality

Hiring AI used by fast-food companies has been shown by Cathy O’Neil to screen out those with any hint of a history of mental illness – a move specifically prohibited by the Americans with Disabilities Act. But attempts to investigate this have so far been stalled by claims that the AI is a trade secret and cannot be disclosed. One law for humans – another for machines. This isn’t integrity in action.

Socially deaf definitions of success

Attempts to use AI to allocate school places more fairly in Boston backfired completely when it turned out that nobody programming the AI had the faintest idea – let alone experience – of the way poor families live, or of the timetables that working three jobs imposes on them and their children. Not only were the results worse – they came wrapped in insult. This wasn’t benevolent and it was, frankly, incompetent.

These are real cases. There are more. Each one might be nit-picked apart, but the key issue is this:

AI oversteps a fundamental boundary between objective analysis and moral judgment.

When such moral judgments are made, people deserve a chance to understand them and to validate or contest them. Claims of trade secrecy specifically militate against this. Ethical issues are treated as legal and policy issues – a way of sidelining them that mirrors the way hierarchies and bureaucracies facilitate, indeed drive, wilful blindness in organizations.

AI simply amplifies both the risk and the lack of accountability on an unimaginable scale. You could say it delivers the status quo PLUS. So AI has the capacity to increase and sustain marginalization, corporate malfeasance and inequality.

All of the companies involved in developing it know this. That’s why there’s a whole roster of organizations trying to figure out how to make AI the commercial gold rush it promises to be – while also hoping to silence fears that anything could possibly go wrong. But there are two difficulties with their approach:

  • Almost all of these organizations are set up and run by – and incorporate – very large institutional interests. These are not the critics of the status quo but its beneficiaries. They start from a position that is inherently unrepresentative.

  • While these talking shops assuage nervous politicians, business as usual continues with a language of inevitability: AI is coming, AI is here, you have no choice but to stand back and take it. It’s too difficult for you bumpkins to understand – so just leave it to us. Or, to quote the CEO of Axon (the company that conveniently changed its name from Taser), speaking in defence of facial recognition cameras:

“It would be both naive and counterproductive to say law enforcement shouldn’t have these new technologies. They’re going to, and I think they’re going to need them. We can’t have police in the 2020s policing with technologies from the 1990s”[1]

"'Sit down. Shut up. When I want something from you, I’ll ask’, is the tenor of so-called public consultation"

This is the language of propaganda: telling people that these new technologies are inevitable – when they aren’t – and that they’re unequivocally productive – when they aren’t – and therefore there is no need, no POINT, in asking questions. ‘Sit down. Shut up. When I want something from you, I’ll ask’ is the tenor of so-called public consultation.

This is not a language that builds or invites trust. And it isn’t a problem to be solved by dishing out a few ethics classes to kids at Stanford – exactly the same step taken at Harvard Business School after the collapse of Enron.

I’d like to be convinced by companies that claim to take these issues seriously, but I’m not. Most of them have signed up to work together, not with the public. But when Google set out to establish an ethics committee to guide its development of AI, who did it select? A Defence Department vendor, the head of a conservative think tank and an army of AI academics, every one of whom obviously has his or her credentials burnished by association with Google.

In other words, the Advanced Technology External Advisory Council (ATEAC) consisted of a group of people wildly unlikely to bring independent thought and dissent to issues of grave civic importance. Designing an ethics committee this way suggests three things. First, they didn’t understand the problem they were trying to solve. Second, they are blind – even after the Cambridge Analytica scandal – to the dangers of the industrial/academic complex. Third, with one Brit and one Gambian but the other six members American, this global business is far from globally conscious. No wonder the council was disbanded before it began.

Now it may sound like I’m an out-and-out naysayer, a Luddite, a crazy woman who just “doesn’t get it.” But I love technology; I always have. Tech has given me a great career, some of the finest collaborators and co-creators I’ve ever worked with, some of the most fun I’ve ever had and a rich experience of exploration and invention. I owe it a lot. I want it to live up to its promise – because if it can, when it can, trust will no longer be an issue.

But to get there, a whole bunch of things need to happen:

  • This propaganda of inevitability – a strange brew of salesmanship, hype and spin – has to stop. It’s misleading, dishonest and disrespectful of consumers and citizens. It isn’t the language of benevolence but of the bully.

  • The tech sector badly needs to get a lot better at speaking plain English that anyone can understand. I don’t think this is impossible. It takes effort and empathy, but if you can’t explain it, we shouldn’t buy it.

  • It needs to do this because the only way that the public will trust AI is if the public is involved in debating and deciding where its limits and boundaries are. We know now that we cannot and should not trust the leaders of major corporations to do this – not because they are specifically bad people but because the interests and perspectives of corporations and citizens are not identical. This has been true throughout history and tech, for all its exceptionalism, is just another kind of business.

On this topic, the Astronomer Royal Martin Rees said that there is no ivory tower or penthouse high or remote enough for scientists and engineers to disown accountability for the uses to which their work is put.

As companies have become huge, over-mighty and sometimes seemingly more functional than governments, it’s tempting to think they know best – just because they know more. But let’s not forget who serves whom here. Businesses serve society; that’s their role, their function, their justification. It’s why they have – when they have – a licence to operate. And business has an interest in serving it: without society, there is no market to serve. Without law and order, without some semblance of shared interest and common purpose, business – trusted business – absolutely fails. And you probably don’t relish running businesses in places or at times when that tragically occurs. So however frail it seems, business needs legitimacy and flourishes when trust is high. Where trust and legitimacy fail, business fails too.

Finally, it’s become fashionable to argue that debate and discussion are fruitless: that nobody can ever agree, that debate just polarizes and that, anyway, most people are just too stupid. I know that in the safety and comfort of a democracy it’s easy to deride the privilege we have all enjoyed for centuries. But this cynical trope is wrong. All over the world there are communities, counties and countries that have had public debates and discussions whose outcomes were accepted because the processes were open, inclusive and deemed fair. And what those examples vividly demonstrate is that people can understand issues of great complexity if given the right opportunity and information: yes, they listen and do their homework; yes, they frequently change their minds; and yes, they make informed, judicious decisions.

But when people are given this opportunity, it isn’t merely the outcome that matters. It is legitimacy. Where people are consulted, where they are heard, where they can listen – they can and do support decisions they understand, even if they don’t agree with them. The price of legitimacy is not ad spend or tech dominance or propaganda.

The price of legitimacy is participation: a lesson we are learning now, if we pay attention.

Let’s never forget the brilliant Dutch saying: Trust arrives on foot – but leaves on horseback.

Margaret Heffernan is co-curating a project on Social Justice In Tech, supported by the investment bank Stifel. The programme is being led by Jericho Chambers and commences in May 2019. Portions of this article were presented at “Are we running out of trust?” – a special executive discussion evening hosted by Fujitsu at the QEII Conference Centre in London on March 20, 2019.
