Privacy and cyber risks inherent in AI technology

29 May 2019

As organizations find more and more commercial uses for artificial intelligence (AI), they need to be conscious of data, privacy and cyber risks, according to a new report from Lloyd’s.

AI has been around for over six decades, but the number of real-world applications has increased rapidly in recent years, with uses in medicine, education, marketing and banking. The technology is now being widely used to power robots in factories, diagnose medical conditions, scan legal documents, and answer customer enquiries online. 

According to Lloyd’s, a long-term shift to strong AI could see AI capabilities evolve to be indistinguishable from human intelligence.

AI is still in the early stages of development, but already important ethical, privacy and cyber risks are emerging. Companies employing forms of AI could fall foul of ever tougher data protection and privacy regulation, while more specific AI regulation is in the pipeline.

Data Privacy

Data is the oxygen of AI, but personal information could be subject to data protection and privacy regulations, such as the EU’s General Data Protection Regulation (GDPR).

According to the Lloyd’s report, the GDPR is critical for AI, which relies on huge amounts of data. GDPR offers consumers and end-users a high degree of control over their data and requires entities that collect data to seek permission for how they use the information. It also gives consumers a right to an explanation for automated decisions made by AI.

AI systems capture data in order to make decisions, and data is also used for profiling (e.g. for targeted marketing or advertising). For example, a driverless car uses AI and sensors to process images of pedestrians, recruitment software could mine social media data, and a personal fitness device provided by an employer could gather data for insurance purposes.

According to a 2018 report from the Information Policy Centre (IPC), there is “tension” between AI and data protection laws. For example, it might be unclear whether data used by an AI system falls within the scope of data protection laws, given current definitions of personal information, the range of data captured by AI, and how it is aggregated, compared and interpreted.

Essentially, AI makes it more likely that non-personal data can be made identifiable. Increased computing power and the collection of more and more data (such as biometric or facial recognition data) provide opportunities to identify individuals more reliably, the IPC said.

In addition, information that was once considered non-personal now has the potential to be personal data, according to the IPC.
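
The re-identification risk is easy to demonstrate. The sketch below is a minimal illustration, using entirely hypothetical data, of the linkage effect the IPC describes: records stripped of names are joined to an auxiliary public dataset on shared quasi-identifiers, re-attaching identities to “anonymised” information.

    # Minimal linkage sketch with hypothetical data: the "anonymised" records
    # carry no names, but quasi-identifiers (postcode, birth year, gender)
    # can be joined against a public dataset to recover identities.
    import pandas as pd

    # "Anonymised" health records: direct identifiers removed.
    anonymised = pd.DataFrame({
        "postcode": ["SW1A 1AA", "EC2N 2DB"],
        "birth_year": [1980, 1975],
        "gender": ["F", "M"],
        "diagnosis": ["asthma", "diabetes"],
    })

    # Auxiliary public data, e.g. a voter roll or social media profiles.
    public = pd.DataFrame({
        "name": ["Alice Smith", "Bob Jones"],
        "postcode": ["SW1A 1AA", "EC2N 2DB"],
        "birth_year": [1980, 1975],
        "gender": ["F", "M"],
    })

    # Joining on the quasi-identifiers re-attaches names to diagnoses.
    reidentified = anonymised.merge(public, on=["postcode", "birth_year", "gender"])
    print(reidentified[["name", "diagnosis"]])

The more attributes an AI system collects, the more reliable such joins become, which is why once-innocuous data can tip into personal data.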

There are also likely to be issues with the lawful collection of data used by AI and its relevance to the task at hand. 

Most data protection laws require a lawful basis for collecting and processing data – under the GDPR, the individual’s consent is one such basis. The GDPR also requires that data be adequate, relevant and limited to the purposes for which it is processed.

Given the volumes and diversity of data used by AI, complying with data protection laws, like the GDPR, could be challenging, according to the IPC report.

Bias and Transparency

Prejudices and biases – such as gender and racial bias – are another emerging risk for AI. These can be unwittingly introduced by AI software developers, or can be inherent in the data used to train AI systems and in their decision-making. According to Lloyd’s, the potential for algorithmic bias in AI systems presents new risks, since we will become increasingly reliant upon AI to identify relevant data and act upon it.

Banks and insurers, for example, are beginning to use AI and algorithms to assess risks – such as credit risk – and make decisions on whether to offer products or how much to charge.

AI is also being used to automate recruitment processes. However, there are concerns that such algorithms contain bias and could discriminate against certain groups in society.

An algorithm developed and tested by a technology company, for example, was found to be sexist and therefore unusable. The AI system had been trained on data submitted largely by male applicants over a 10-year period, and had taught itself that male candidates were preferable.
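
The mechanism is straightforward to reproduce. The toy sketch below uses entirely synthetic data (not the company’s actual system) to show how a classifier trained to imitate historically biased hiring decisions absorbs gender as a predictive feature.

    # Toy sketch with synthetic data: if past hiring decisions favoured men,
    # a model trained to reproduce them learns gender as a predictor.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    experience = rng.normal(5, 2, n)    # years of experience
    is_male = rng.integers(0, 2, n)     # 1 = male, 0 = female

    # Historical "hired" labels reflect experience AND a bias toward men.
    hired = (0.5 * experience + 2.0 * is_male + rng.normal(0, 1, n)) > 3.5

    model = LogisticRegression().fit(np.column_stack([experience, is_male]), hired)
    print("coefficients [experience, is_male]:", model.coef_[0])
    # A large positive weight on is_male shows the model has absorbed the
    # historical bias: it will prefer a male candidate of equal experience.

Note that simply deleting the gender column rarely fixes this, because correlated proxies in a CV (university, hobbies, word choice) can carry the same signal.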

Transparency is another issue. AI systems are often referred to as “black boxes” because their complexity makes it difficult to explain how information is used and how decisions are made. However, data protection laws may require a high degree of transparency.

GDPR, for example, specifically limits the use of certain automated decision-making and gives consumers the right to have decisions reviewed or contested.
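
One way to support review and contestation is to favour interpretable models whose individual decisions can be unpacked. The sketch below, using hypothetical credit features and a logistic regression, produces a crude per-decision explanation from coefficient-times-value contributions.

    # Sketch of a per-decision explanation with hypothetical credit features.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    features = ["income", "debt_ratio", "missed_payments"]  # income in £000s
    X = np.array([[55, 0.2, 0], [21, 0.7, 3], [40, 0.4, 1],
                  [18, 0.9, 4], [62, 0.1, 0], [25, 0.6, 2]], dtype=float)
    y = np.array([1, 0, 1, 0, 1, 0])  # 1 = approved, 0 = declined

    model = LogisticRegression().fit(X, y)

    applicant = np.array([30, 0.8, 2], dtype=float)
    # Crude attribution: each feature's contribution to the decision score.
    contributions = model.coef_[0] * applicant
    for name, c in sorted(zip(features, contributions), key=lambda t: t[1]):
        print(f"{name}: {c:+.2f}")
    print("approved" if model.predict([applicant])[0] else "declined")

Coefficient-times-value is only a rough attribution; for genuinely black-box models, organisations typically turn to post-hoc techniques such as LIME or SHAP.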

Regulation

Today, there is little AI-specific regulation, but more proactive regulation of AI is coming, according to the Lloyd’s report. Key issues include accountability – under existing law, an AI system would not be held liable for any damage resulting from its decisions – as well as ethics and privacy.

In the US, for example, lawmakers recently introduced the Algorithmic Accountability Act, which would require certain companies to conduct data protection impact assessments for algorithmic decision-making systems, with a particular focus on bias, discrimination, privacy and security.

The EU also recently published its Coordinated Plan on Artificial Intelligence, which, among other things, proposes that AI ethics guidelines align AI regulation with existing privacy laws. 

The EU says AI systems must have adequate data governance mechanisms, while data, systems and AI business models need to be transparent.

Cyber Risk

AI also raises important considerations for cyber risk.

AI can automate cyber attacks and could be used to exploit human vulnerabilities and those of other AI systems, according to Lloyd’s. 

At present there is very little cybersecurity analysis of AI systems and the potential attack surface they present. However, criminals are beginning to use AI to learn how to conduct all stages of a typical attack, from reconnaissance to crafting a specific attack, the Lloyd’s report said.

An AI report published by insurer Allianz in 2018 also warned that AI-based applications will increase the vulnerability of businesses to cyber attacks and technical failure, leading to larger scale disruption and loss scenarios. 

AI could enable more serious and targeted cyber incidents to occur by lowering the cost of devising attacks, it said.

For example, an AI system could enable more sophisticated and automated social engineering, using AI to send out phishing emails and to respond to the receiver. AI bots could personalise messages and learn which types of emails or attacks work best.

On the flip side, AI can also help bolster cybersecurity systems, as it can learn and react to threats much faster than traditional methods.

For example, it is being used to identify vulnerabilities in systems, detect threats like malware, and seek out intruders in networks.
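
As a minimal illustration of this defensive use, the sketch below trains an anomaly detector on hypothetical network-flow features using scikit-learn’s IsolationForest: the model learns what “normal” traffic looks like and flags statistical outliers for human review.

    # Learning-based intrusion detection sketch with hypothetical flow data.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)

    # Hypothetical flow features: [bytes sent, duration (s), distinct ports].
    normal_traffic = np.column_stack([
        rng.normal(5000, 1000, 500),
        rng.normal(2.0, 0.5, 500),
        rng.integers(1, 4, 500),
    ])

    detector = IsolationForest(contamination=0.01, random_state=0)
    detector.fit(normal_traffic)

    # A suspicious flow: a huge, long transfer touching many ports.
    suspect = np.array([[250000, 30.0, 40]])
    print(detector.predict(suspect))  # -1 flags an anomaly, 1 means normal

In practice such detectors run alongside signature-based tools, since purely statistical flags still need human triage.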

Sarah Stephens

As part of Marsh JLT Specialty’s London-based Financial Lines Group, Sarah and her team work both directly with our clients and with network colleagues and independent partners to make sense of cyber, technology, and media E&O (PI) risks and create leading-edge bespoke insurance solutions in the London market.

Prior to this, Sarah spent 12 years with Aon in a variety of roles. Her last role at Aon was Head of Cyber & Commercial E&O for the Europe, Middle East, and Africa (EMEA) Region, working with colleagues across business groups and clients in the region to identify, analyse, and drive awareness of cyber risks, exposures, and both insurance and non-insurance solutions.

Previously, Sarah spent seven years with Aon’s US Cyber and Errors & Omissions practice group, thinking nonstop about cyber insurance way before it was cool. Her first four years at Aon were spent in the Account Management group, working with large clients and developing a keen eye for excellent client service.

For further information, or to learn more about cyber insurance, contact Sarah Stephens, Head of Cyber, on +44 (0)20 3394 0486.

For more articles like this, download our Cyber Decoder.