Social Justice in Tech: Men, Women and Power

The following article is based on a roundtable conversation held at The Delaunay in London on 10 June 2019. It is the first in a series - supported by Stifel - looking at the reputational and regulatory challenges faced by Big Tech on both sides of the Atlantic. The series will seek to explore the considerable power that such companies now possess and how that power is wielded. A podcast covering some of these issues was recorded before the roundtable and can be listened to here.

When billionaire Blackstone co-founder Steve Schwarzman was looking for a project to headline his recent £150 million donation to Oxford University, he chose the ethics of artificial intelligence, which will be the focus of a new institute. “It is clear we need to develop a robust discussion on which technologies we can implement, how fast, and when,” Mr Schwarzman told the Financial Times. “It has to be done in a very smart and very balanced way”.

Robust discussions are now the norm in the world of technology, especially around Big Tech. The FANGs - Facebook, Amazon, Netflix, Google - are all under the microscope and regulators are at the ready on both sides of the Atlantic. A new wave of regulation, led by Europe, is underway. Regulators in the West have fought antitrust battles with tech firms before - IBM from the late 1960s and Microsoft in the 1990s. However, today’s giants are accused of more grievous sins than merely capturing huge rents and stifling competition. They are being held accountable for acts such as destabilising democracy through misinformation and abusing individual rights by invading privacy. More commandments are being broken and the punishments could be more severe.

There is particular anxiety about AI at the present time. As it is adopted more broadly across society, and especially when it deals with human data, it suffers from the ‘black box problem’: its workings seem arcane and unfathomable without a thorough understanding of what it is actually doing. It is seen as a mind and a law unto itself, like HAL from 2001: A Space Odyssey.

If AI is to advance towards its full potential it must be trusted – we need to know what it is doing with our data, why, and how it makes its decisions on issues that affect our lives. This can be very hard to convey – particularly as what makes AI especially useful is its ability to draw connections and make inferences which may not be obvious, or may even seem counterintuitive, to us.
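What ‘explainability’ can mean in practice is easier to see with a concrete example. The sketch below is a minimal, illustrative Python example - assuming scikit-learn, and using its built-in breast cancer dataset and an off-the-shelf model as stand-ins, not any system discussed at the roundtable. It probes a black-box model with permutation importance: shuffle each input in turn and measure how much accuracy suffers, revealing which inputs the model’s decisions actually lean on.

```python
# Illustrative sketch: probing a black-box model with permutation
# importance (assumes scikit-learn; dataset and model are stand-ins).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the fall in test accuracy:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda p: -p[1])[:5]:
    print(f"{name}: {score:.3f}")
```

Techniques like this do not open the black box fully - they show only which inputs matter, not why - but they give users and regulators something concrete to interrogate.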

But building trust in AI systems isn’t just about reassuring the public. Research and business will also benefit from openness which exposes bias in data or algorithms. Reports have even found that companies are sometimes holding back from deploying AI for fear they may face liabilities in the future, if current technology is later judged to be unfair or unethical. And there is the problem - across tech - of a marked lack of diversity. If AI is going to be widely accepted as legitimate, it cannot be based on software produced by one narrow sector of the community. It must be produced by people who look like the society it serves.

In early June, Stifel and Jericho Chambers held an evening roundtable discussion with the title “Tech: men, women and social justice”. The object was to consider what comes next for the tech world.

The discussion was opened by Margaret Heffernan, author and tech entrepreneur:

“A few years ago I was asked to speak at a CEO summit - one of those events at which venture capitalists bring all the CEOs of their companies together to learn from esteemed experts. What first took me aback was the VC who opened with a really stunning piece of rhetoric along the lines of: there is nothing wrong with education that getting rid of teachers won’t solve; there is nothing wrong with healthcare that getting rid of doctors won’t solve; there is nothing wrong with the legal system that getting rid of lawyers won’t solve; and so on. I wondered if he was going to conclude by proclaiming there is nothing wrong with government that getting rid of government won’t solve. But he didn’t quite go that far.

“This was a breath-taking arrogance of a kind I’d never encountered in the world of tech before. But I didn’t hear or talk to anybody who thought that the rhetoric was merely rhetoric. No. Tech was great and people were the problem. If you could just get people out of the system, everything would be hunky dory. And this was in the pre-Facebook days. So this was, if you like, the gentler form of internet 2.0 - and it hasn’t become any gentler since.

“What is really interesting about all of that rhetoric, apart from its sheer brute force, is that nowhere in it is there any sense that anybody has responsibility for anything. Get rid of all this stuff, sweep it aside - not our problem. Brave New World.

“Since then I’ve talked to - or listened to - a number of chief execs from important high-tech companies, including Amazon, who claim they are very happy to be regulated because they don’t really know where the boundaries are. This is spectacularly disingenuous, because it’s saying: we are just going to go on playing with matches unless and until somebody stops us. It’s not our house that we might be burning down.

“It’s as if tech owes nothing to society, even though everybody who writes code grew up in it, was educated in it, drives on roads built by it, hopes to breathe the clean air that society might one day provide. You know - that’s not our problem. We’re just here to build amazing technology, and the rest? Well, just stop us if we go too far.

“If they are saying, in essence, ‘We can’t trust ourselves to know where the boundaries are’, why should we trust them? An example: we’re now using AI to sort immigration visas at the Home Office in ways that nobody will explain. They say, ‘Oh, it’s not decision making, it’s just streaming’. Well, if you go into a hospital and get involved in triage, I would be very surprised if you didn’t feel that that was decision making. Streaming is triage. Triage is decision making. And it means you go to the top of the pile or the bottom of the pile.

“So AI is taking software, which used maybe to be neutral, into an area that clearly isn’t neutral but is about ethical decisions. The simplest explanation of ethics is that it’s about other people. We know there is software out there going through the resumes of people applying for fast-food jobs and throwing out anything that indicates a history of mental illness. In the United States this is strictly against the Americans with Disabilities Act, so you can’t do it in person. So they are doing it ‘out of person’, if you like.

“Facial recognition software is becoming a really hot topic. San Francisco has decided to ban it, but the Metropolitan Police is pretty gung-ho about its worth. Even more frightening than facial recognition, of course, is affect-prediction software - ‘I can tell from the look on your face you’re about to dot, dot, dot’ - and none of it with robust science behind it. We are seeing software being used to predict child-protection issues - which kids are most likely to be harmed by their parents - and again the data sets are incomplete, because you only have data for the parents who did hurt their kids, which means you have a totally skewed data set.

“What's really quite worrying about this is that there is a proliferation of organisations set up to try to opine on this. So the big tech companies are talking to each other while the non-profits talk to each other. There is very little overlap. How might we close that gap?

“What are the practical problems? One is that software is being rolled out that is definitely not ready for primetime. The public is being used as a test case because there is a huge competitive land grab going on. Companies have all bought into the early-adopter thesis and want to own a space in the territory so that they can collect all the data that might one day - possibly, if they are really lucky - let them develop decent software. And there is the propaganda of inevitability: just roll over and play dead, because this is coming to a town near you, there is nothing you can do about it, so shut up and go away.

“The second problem is that the people making this software are not in any way representative of our society. Martha Lane Fox quite rightly startled people when she pointed out that 90% of the software in the world is written by men. I’ve made radio programmes, television programmes, films, software and books. Your work always expresses who you are; it can’t not do so. So this is the most spectacularly biased group you can imagine, and that means it is structurally dangerous.

“And I think the third issue is that all of this is mostly being hidden under the cover of trade secrets, which is why courts can’t inquire into it, and why the Home Office won’t tell anybody quite how its software works: it argues that a) companies would copy it, and b) people would try to game the system. So from the get-go they are resisting any kind of scrutiny or, indeed, legal responsibility. All of these things undermine social justice and, I would say, corporate legitimacy. With rising inequality, huge questions are being asked about who has a licence to operate and how they prove it. I think the only way you prove it is by having organisations that, even if they are not perfectly representative, are representative in their decision making and in the kinds of people they hire, promote and listen to.

“I don’t have any huge confidence that life is going to get easier in the next 10-20 years. I think it’s going to get harder. I think all of us are going to be held to a higher standard, and as somebody who loves tech and has had a fantastic career in technology, I would like to see that we aren’t just leading in cool new whatsits, but leading in cool new ways to think about what we’re doing. And that, I think, is our challenge tonight”.
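Margaret’s earlier point about child-protection data is, at bottom, a statistical one about selection bias: a model trained only on families that were reported learns who gets reported, not who causes harm. The following toy Python sketch - entirely synthetic data, assuming only NumPy, and not based on any real system - shows how the skew arises.

```python
# Toy demonstration (synthetic data) of selection bias in a
# child-protection-style data set: labels exist only for reported cases.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Synthetic population: a visible proxy trait and a true (hidden) risk
# that is deliberately independent of that trait.
poor = rng.random(n) < 0.30
harm = rng.random(n) < 0.05

# Reporting to the agency is driven by visibility, not by actual harm:
# poorer families have far more contact with services that report.
reported = rng.random(n) < np.where(poor, 0.40, 0.05)

print(f"poor share of population:     {poor.mean():.2f}")            # ~0.30
print(f"poor share of reported cases: {poor[reported].mean():.2f}")  # ~0.77
print(f"true harm rate, poor:         {harm[poor].mean():.3f}")      # ~0.05
print(f"true harm rate, non-poor:     {harm[~poor].mean():.3f}")     # ~0.05
# A model trained only on reported cases inherits this skew: it will
# treat poverty as predictive of harm even though, by construction
# here, it is not.
```

By construction the true harm rate is identical across groups, yet poor families make up roughly three quarters of the training data - exactly the distortion a naive model would learn as ‘risk’.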

But, asked the chairman, wasn’t it true that questionable practices and decisions existed within a government department like the Home Office long before AI? What about Windrush, for example? “Yes,” replied Margaret. “But the potential is to accelerate and amplify the biases we saw in people before. The fact that we can do bad stuff faster doesn’t strike me as a great leap forward. And you have the fact that its designers won’t tell us how it works”.

Caroline Plumb, the CEO of fintech Fluidly, responded: “It’s not necessarily a question of telling. A lot of these AIs are built around deep-learning algorithms, so there is no explainability. A lot of the promise of artificial intelligence is that it’s akin to an autopilot. You’re not taking away control from the pilot, who always has the override and is responsible for flying the plane. It’s a capability enhancement, not necessarily a substitution.

“I used to work at an innovation consultancy before moving into fintech. That taught me that consumers are notoriously awful at forecasting the future, and they are particularly bad at seeing capability enhancement. Substitution - that’s another matter. Better, faster, cheaper. Yet most technological breakthroughs are driven not by ‘better, faster, cheaper’ but by ‘different’. Human brains don’t see the different use case that is actually there to be seen.”

Mikael Down, the executive director for assessment at the Banking Standards Board, is studying for a PhD in the ethics of AI. He said: “I think the need for accountability and explainability arises because we don’t have a reason to trust tech. The companies haven’t shown themselves to be trustworthy. I would take paracetamol to rid myself of a headache without really understanding how it works, simply trusting that it will do what it should. But I might think twice about volunteering my fingerprint when I use my smartphone.

“Ultimately everyone has their own thresholds for what they trust and why. But if you’re going to earn the right to a social contract you have to earn the trust of the society that you serve. From the way in which, for example, the Cambridge Analytica scandal unfolded, and the subsequent scrutiny of the activities of the big tech companies, it’s quite clear that, as Margaret has written, trust is something that arrives on foot and leaves on horseback. It has to be demonstrated and it has to be earned, and the tech companies have so far done a pretty bad job of that. Establishing an ‘Ethics Council’ and making noble statements like ‘Don’t be evil’ don’t get you there”. Mikael also pointed out that the Midas problem is especially applicable to tech: “If you really don’t think extensively about the design of models, factoring in a diversity of perspectives, unintended consequences will undoubtedly occur”.

Peter Globokar of Stifel noted that tech encompasses many things. “When you get down to it, this conversation is all about data: where it comes from, how it is collected, and who owns it. Today it’s all about regulating data. In a sense there is nothing else to regulate other than whether data is lawfully recorded. Do I know where my data goes? No. I have no idea as a consumer, nor do I get compensated for it, so I have no control.

“Well, it does bother me, because I know that even today, if I wanted to control my data or my movements, it would be very, very hard. So you have to effectively just accept it”.

Rodney Schwartz, CEO of ClearlySo, made the point that, while he was no expert in tech, he could think of at least two examples from his career where its intelligent use had led to outcomes that were unquestionably to the benefit of society. One was a niche mortgage lender which helped people without jobs secure loans to purchase homes; the other was an IFA for gay people, who had found it very hard to get life insurance. “Maybe they didn’t use fancy things like AI,” he said, “but they did data analysis and showed a better way of doing things. Digital tools can have a positive impact in combating ignorance and prejudice”.

Caroline Plumb agreed, pointing to the use of AI in medicine and of neural nets in the detection of breast cancer, which has saved lives because the nets proved more effective than humans acting alone. “We’ve heard a lot recently about the data issue around women - that women are systematically underrepresented in data,” she said. “So you might say that actually cancer is cancer and the data sets are true, but that is fundamentally not the case either. Those data sets are just as bad as any other data sets. Fundamentally all data is biased, and you have to recognise that bias and its outcome. All I’m saying is that to throw it away altogether because you can’t explain it, or because it might be biased, is also massively flawed, because it still allows you to make decisions and judgements in the right places - and you can recognise that bias and actively look to supplement your data”.
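One way to act on Caroline’s suggestion - recognising bias rather than discarding the model - is to measure performance separately for each group and let any gap show where the data needs supplementing. A minimal, illustrative Python sketch, assuming scikit-learn and using made-up numbers rather than any real medical data:

```python
# Illustrative per-group audit of a model's outputs (assumes
# scikit-learn; y_true, y_pred and group are made-up placeholders).
import numpy as np
from sklearn.metrics import recall_score

# y_true, y_pred: labels and model predictions on a held-out set;
# group: a protected attribute, e.g. patient sex.
y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 0])
group  = np.array(["f", "f", "f", "f", "f", "m", "m", "m", "m", "m"])

for g in np.unique(group):
    mask = group == g
    print(f"recall for group {g}: "
          f"{recall_score(y_true[mask], y_pred[mask]):.2f}")
# Here recall is 0.67 for group f but 1.00 for group m: a cue that
# group f is likely under-represented (or mislabelled) in the
# training data.
```

If one group’s recall lags, the remedy she describes is to collect more examples from that group, or reweight them in training, rather than abandon the model.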

Matthew Gwyther, the chair, wondered if judgement was making a comeback. He pointed to the recent news that Google, in a belated attempt to clean undesirable content off YouTube, had banned the Leni Riefenstahl film ‘Triumph of the Will’, about the rise of the Nazis in Germany, thus depriving not only would-be scholars of the 1930s but also admiring modern-day neo-Nazi groups of the chance to watch and evaluate it. It was surely an important historical document, and the crude, unintelligent censorship that Google and Facebook undertake was simply not good enough.

Eithne O’Leary, President of SNEL and Head of Stifel’s London office, agreed that judgement was critical, and that it is sometimes helped by a regulatory framework. “I had a conversation on a compliance topic earlier this evening with some of my staff, where we were trying to extract the meaning of compliance for a particular case - which was to protect the people who were serving the clients. That’s what it’s there for in its purest sense. To some degree tech is missing that, because it doesn’t have a function that protects the people who are on the cutting edge. Now, if you’re on the cutting edge of gaming technology, that’s one thing. If you’re on the cutting edge of deciding what sort of killing machine makes what sort of decision in a defence context, then that is a lot of pressure around which there is no support, as far as I can see. Compliance is actually damned useful.

“I think an important question for people in tech and banking to ask themselves is simply: because you can do it, should you do it? In banking we now have a whole industry asking ‘should you do it?’. In tech, I think, that is largely yet to appear. They do things simply because they can. That is not to say the outcomes are all awful - far from it. There are many good decisions. But who is asking that vital question?”

Chi Onwurah, the MP for Newcastle upon Tyne Central and Shadow Minister for Industrial Strategy, was once employed at Ofcom and has broad experience of regulation. She said: “Financial services are transactions which happen multiple times per second, minute or hour. They are not of the same order of scale as software engineering and telecommunications transactions, which happen millions of times per second and are automated and self-learning. So I think your framework suggests that there is somebody there to take a decision, whereas actually, when you write a piece of code, that code lives and takes decisions continuously.

“Before regulation you must agree on definitions. If you look at advertising in the 19th century, claims were not being verified. The ways in which everything from flour to soap was advertised had no regulatory framework. That framework built up over years, but it wasn’t built by defining companies as being this or that; it asked what rights consumers or citizens have to an honest advert, and what the responsibilities are. I think you start with data ownership, you add transparency around algorithms, and you add better regulation of monopoly. You define your markets properly.

“If we do something that says ‘we’re going to take down Facebook’, along the lines of the way the US ‘took down’ AT&T, it will just come back. That break-up didn’t sort out the problems in competition regulation: it improved them for a time, but not in the long term. I think we need to start with a regulatory framework. Facebook has monopolies, or significant market power, in at least a couple of markets, such as advertising and personal data, and that is something that needs to be addressed”.

The panel also discussed the implications of a recent study by the AI Now Institute in New York, which found that 80% of AI professors are male and that just 15% of AI researchers at Facebook are female. At Google - the home of the James Damore row - that number is 10%, and just 2.5% of its workforce is black.

Check Warner of Diversity VC and Seraphim Capital said: “If you read Emily Chang’s book ‘Brotopia’, some of the answers to this are hinted at. It goes back to the 1970s in the US and shows that women and men entered technical fields and computer science degrees at pretty much the same rate, 50/50. Over time a sort of bro culture developed, and by the 90s and 2000s the field had become much more male dominated. Because women were the early programmers and typists, they tended to be the ones who were computationally much more literate than the men. So we cannot write this off as ‘oh, men just tend to have more affinity towards coding and engineering than women’. There are so many factors that feed into that from a very, very young age. So I think it’s incumbent upon all of us not to just accept that as the status quo”.

Rodney Schwartz added: “Anybody who has looked at venture capital returns in the UK knows how poor they are. And that may be because the firms are all male. A long time ago I worked for Lehman Brothers. The man who ran research in the United States, since deceased, was Jack Rivkin. This was at a time when there were hardly any analysts who were women. He saw that there was sexual discrimination in the market, found that the best analysts available for the right price were women, and recruited many. Lehman’s research department went from 13th-ranked to number one in the country under his tutelage, and there is a Harvard Business School case written about it. So I am hoping that more diverse VC firms will emerge and embarrass the industry the same way Jack did”.

The issue of size, scale and competition was also discussed. Several people questioned whether, had WhatsApp and YouTube been operating in steel manufacture, their takeovers by giants would have been permitted. There can be no denying that the “Techlash” is now in full flow. The unique, modern nature of Big Tech’s power means it is possible that reform via regulation will go further than it did in the United States with the Robber Barons. There are those who believe that merely considering prices and market shares is far too simplistic - especially because the technology is often free to the user and is always changing the shape of the market.

Tech has stolen a march on the regulators. It now looks naive that the UK’s Office of Fair Trading was so relaxed about Facebook’s purchase of Instagram that it saw Instagram as a “camera and photo-editing app”, rather than a social network, and thus most unlikely ever to be “attractive to advertisers on a stand-alone basis”. That must have amused Mark Zuckerberg.

The fact that four of the five most valuable publicly traded firms in the world are technology companies, with a combined market value of $3trn, has given Big Tech immense muscle. So have the massive revenues which most of them turn into profits. But the fact that all the numbers associated with the FANGs are huge - with the exception of their tax bills - is one reason they have so many enemies.

With power must come an acceptance of responsibility. Having benignly neglected these companies for years, democratic governments across the globe are now producing a multitude of policies to regulate them. The risk is that the flurry of policy making will overcorrect and do more harm than good, not least by unintentionally stifling innovation and competition and raising barriers to entry for energetic and imaginative startups. There are at least four separate regulatory policy issues that need to be addressed:

  1. Privacy

  2. Market power

  3. Free speech and censorship (including inappropriate content)

  4. National security and law enforcement

Each will require careful consideration, and any response must be properly targeted and carefully constructed to minimise the risk of counterproductive outcomes. These are the themes we will discuss in the next roundtable.

 ***

The next Social Justice in Tech roundtable - convened on behalf of Stifel and CEO Eithne O’Leary - is planned for September 2019, further exploring themes of “power”. It will be accompanied by new articles and broadcast content. If you would like to contribute, please get in touch.

Attendees

  • Mikael Down, Executive Director for Assessment, Banking Standards Board

  • David Gentle, Director of Strategy and Foresight, Fujitsu

  • Peter Globokar, Managing Director, Stifel

  • Margaret Heffernan, Leadership Thinker, Author, CEO

  • Eithne O’Leary, President of SNEL, Head of London, Stifel

  • Chi Onwurah, MP, Newcastle upon Tyne Central

  • Caroline Plumb OBE, CEO, Fluidly

  • Rodney Schwartz, CEO, ClearlySo

  • Sarah Turner, Co-founder and CEO, Angel Academe

  • Check Warner, Co-founder & CEO, Diversity VC and Venture Advisor, Seraphim Capital

  • Lina Wenner, Principal, firstminute Capital

  • Devika Wood, Co-Founder & former Chief Development Officer, Vida

Matthew Gwyther

Matthew edited Management Today for 17 years and during that time won the coveted BSME Business Magazine Editor of the Year award on a record five occasions. During a fifteen-year career as a freelancer he wrote for the Sunday Times magazine, The Independent, The Telegraph, The Observer and GQ, and was a contributing editor to Business magazine. He was PPA Business Feature Writer of the Year in 2001. He has also worked on two drama serials, one for Channel 4 and one for the BBC. Before becoming a journalist he had a brief and inauspicious spell as a civil servant, working at the Medical Research Council in its London secretariat.

Matthew is the main presenter on BBC Radio 4’s In Business programme.

Matthew is also the co-author of Exposure, published by Penguin in London and New York in the autumn of 2012. It is the story of whistleblower Michael Woodford, the “Southend samurai” who left school at 16 and worked his way up to the top post at the Japanese industrial conglomerate Olympus, only to discover that his board was involved in a two-billion-dollar fraud.

Contact: matthew.gwyther@jerichochambers.com

https://www.linkedin.com/in/matthew-gwyther-8b043210/