Ethics and AI

Author: Karim Derrick

25 February 2020

Ethics and AI: are we heading towards a dystopian future?

There can be no denying that Artificial Intelligence (AI) has already changed the way we live. Whether it’s predicting what we might like to buy on Amazon or creating tailored fitness programmes to put us through our paces, an AI-enabled future is already here.

Last year the UK health secretary Matt Hancock announced that the government would spend £250 million on setting up an AI laboratory to boost the role of AI in the National Health Service (NHS). The government believes AI has “enormous power” to improve care and save lives, as well as freeing up more time for doctors to spend with patients. That is perhaps an example of AI at its most positive.

For all the good that AI has brought and the convenience it has delivered to everyday life, not forgetting the invaluable insights it has given to businesses, its detractors voice considerable concerns. They question the ethics of machine intelligence and the impact it could have on our privacy.

At the start of the year, The New York Times published an exposé on what was then a little-known tech company that had created a tool to help law enforcement agencies solve crimes by matching photos of unknown people to their online images. At its best, the article painted a picture of the global reach of social media being used to catch criminals swiftly and effectively; at its worst, it suggested a significant milestone in the erosion of our online privacy.

The company is Clearview AI. The tool is a facial recognition app that goes far beyond anything created before. Here’s how it works: take a picture of someone, upload it to the app, and it returns public photos of that person, along with links to where those photos appeared online. Clearview claims to have scraped three billion images of people from Facebook, YouTube, Venmo (a digital wallet) and millions of other websites.
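At a high level, facial recognition search tools of this kind typically convert each face into a numerical embedding and then look for the nearest stored embeddings. The sketch below illustrates that general idea only; it is not Clearview’s actual code, and the embed_face placeholder, the similarity threshold and the gallery structure are all assumptions:

import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    """Hypothetical placeholder: a real system would run a trained face
    recognition model that maps a face image to a fixed-length vector."""
    raise NotImplementedError

def find_matches(query: np.ndarray, gallery: dict[str, np.ndarray],
                 threshold: float = 0.6) -> list[tuple[str, float]]:
    # Compare the query embedding against every stored embedding using
    # cosine similarity; return the source URLs that clear the threshold.
    matches = []
    for url, vec in gallery.items():
        sim = float(np.dot(query, vec) /
                    (np.linalg.norm(query) * np.linalg.norm(vec)))
        if sim >= threshold:
            matches.append((url, sim))
    return sorted(matches, key=lambda m: m[1], reverse=True)

In a production system the linear scan would be replaced by an approximate nearest-neighbour index, since comparing one query against billions of stored vectors one by one would be far too slow.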

Federal and state law enforcement officers across the US have used it to help solve a range of crimes, from shoplifting and fraud to murder and child exploitation.

It is the first time AI has been used in this way. Even Google has held back from building such a tool because of the inevitable ethical criticism. In fact, Google, Twitter, Facebook, YouTube and Venmo have all sent cease-and-desist letters to Clearview, specifically taking issue with the scraping of images from their platforms.

The computer code underlying the app, which was analysed by The New York Times, includes the ability to integrate it with augmented-reality glasses. Users would potentially be able to identify every person they saw. The tool could identify activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did and whom they knew.

Clearview has also licensed the app to at least a handful of other companies, reported to be security companies. According to recent news reports, the business is looking to expand into at least 20 other countries.

As for the police, according to the article, Indiana State Police solved a case within 20 minutes of using the app. Two men had fought in a park, and the altercation ended with one shooting the other in the stomach. An onlooker filmed the whole thing on a phone, which meant the police were able to obtain footage of the gunman’s face to run through the app.

Astonishingly, the police found a match via social media, thanks to a video of the perpetrator previously uploaded online, along with a helpful tag that gave his name. The man was subsequently arrested and charged.

Clearly, in this case the AI was used very effectively to catch a dangerous offender, and of course, one would expect the police to use it in the spirit of the law. But as cases around the world have shown, the police are not beyond reproach, and there is nothing stopping officers from using the technology for personal gain. In 2016, an Associated Press investigation found that US police officers were routinely accessing secure databases to look up information on people for reasons that had nothing to do with their work; this even included stalking former partners.

For many, the Clearview tool also raises questions of consent: do we really expect our photos to be used in this way when we upload them? There are also concerns about accuracy, and about what happens when mistakes are made. False positives produced by the tool could put innocent people at risk of arrest, or worse. Research has shown that ethnic minorities are particularly at risk.
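Some rough arithmetic shows why accuracy concerns grow with the size of the database. The false-match rate below is an assumed figure chosen purely for illustration, not a published statistic about Clearview’s tool:

# Illustrative arithmetic only: the false-match rate is an assumption,
# not a published figure for Clearview's tool.
GALLERY_SIZE = 3_000_000_000  # images Clearview claims to have scraped
FALSE_MATCH_RATE = 1e-6       # assumed chance a non-matching image is returned

expected_false_matches = GALLERY_SIZE * FALSE_MATCH_RATE
print(f"Expected false matches per search: {expected_false_matches:,.0f}")
# Prints: Expected false matches per search: 3,000

Even a one-in-a-million error rate per comparison would surface thousands of wrong faces for every query, leaving human reviewers as the only safeguard for the innocent people in those results.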

The technology also throws up questions about AI in general and where the ethical line should be drawn.

As many readers will know, at Kennedys we have been early and longstanding adopters of AI, using it to redefine what innovation looks like for insurers as part of our strategy to help them use lawyers less. Ensuring that this technology is used ethically is critical.

Most recently, by combining human and machine intelligence, we have launched Kennedys IQ.

The Kennedys IQ platform pulls together multiple data points and then uses AI to uncover claims trends and best practice, as well as insight into how the client’s business is performing, what its competitors are doing, and what trends are emerging in the industry.
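The platform’s internals are not public, so purely as an illustration of the kind of aggregation this involves, the sketch below (in which the CSV source and column names are assumptions, not details of Kennedys IQ) surfaces one simple claims trend with pandas:

import pandas as pd

# Illustrative only: the data source and column names are assumptions.
claims = pd.read_csv("claims.csv", parse_dates=["date_opened", "date_settled"])
claims["days_to_settle"] = (claims["date_settled"] - claims["date_opened"]).dt.days

# One example trend: average settlement time per claim type, per quarter.
trend = (
    claims
    .groupby([pd.Grouper(key="date_opened", freq="Q"), "claim_type"])
    ["days_to_settle"]
    .mean()
    .unstack("claim_type")
)
print(trend)

A real platform would draw on many such views at once, combining them with external market data to show how a client’s business compares with its competitors.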

Not only do we want to help clients use lawyers less, we also want to help our insurer clients be commercially successful. Using social media to help detect potentially fraudulent claims forms part of this effort. So does analysing claimant behaviour to optimise the claims process, reducing the time insurers take to make decisions about liability and ensuring claimants are compensated for their losses as quickly and as accurately as possible.

None of this is unethical: on the one hand, it cracks down on fraudsters who push up the cost of insurance for everyone else; on the other, it provides a far better experience for those recovering their losses.

What is perhaps needed is a greater understanding among consumers of how their data can be, and is being, used.

Consumers generally seem to be content that Amazon monitors their buying habits and makes predictions on further items for them to buy. It’s a largely accepted and mostly welcome practice.

Sadly, insurers and law firms simply don’t wield that kind of influence. Insurance and legal services are distress purchases. Ordinarily, we turn to a lawyer only after something has gone wrong. We don’t always know when a lawyer will be able to add value to a situation. We don’t always know what type of insurance will best fit our circumstances. So, what if instead the lawyer turned to consumers in order to help them make the best of a situation, or to head it off before it arises in the first place?

Professor Richard Susskind, a leading legal author, argues in his recently published book Online Courts and the Future of Justice that lawyers could make much more of the knowledge and technology at their fingertips by being far more proactive in providing their services to consumers, in the same way that Amazon sells books and pretty much everything else.

What’s wrong with using technology to anticipate user needs? Surely it is better if law firms and insurers can predict consumers’ needs and, as a result, provide a better service when the purchase is made? Would consumers be willing to sacrifice their data if it provided them with a more secure future?

It all comes back down to control. People want to feel in control of their data and of their decisions. We want to feel as though we are active participants, not unknowing observers. We all want to benefit from AI, not be discriminated against because of it.
