Artificial Intelligence – new technology but the same old privacy risks?

The recent hype around new artificial intelligence (AI) technology like ChatGPT has sparked a flurry of activity and discussion about the benefits and burdens of such technology in the workplace.

What this hype distracts from is the ever-growing body of AI technology already in use within organisations; it just may not have attracted the attention that ChatGPT has.

AI poses significant challenges, and even threats, to privacy because of its reliance on the information with which it is supplied. The advent of such technology gives cause to question what measures we already have in place to safeguard personal information, including information generated and stored by AI technologies already in use.

Identifying artificial intelligence already in your organisation

The first step to managing the risks to privacy created by AI is identifying what technology your organisation already uses that might have privacy impacts. In many cases, the fact that it involves AI may not have been considered.

The OECD defines an AI system as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.”[1]

Some key examples of AI and machine learning systems already in use in organisations include HR development tools, health diagnostics, web-based chatbots, employee productivity assessment tools, and various forms of large language models. Large language models, such as ChatGPT, are trained on large amounts of data and use the statistical relationships between words and contexts to generate their ‘output’.

It may come as a surprise to many organisations that they are already using technologies that involve some element of AI or machine learning.

What privacy impacts does artificial intelligence have?

AI poses multiple potential risks to privacy. Not all of these directly involve personal information; some instead indirectly cause problems with the way personal information is used to make decisions about individuals.

Some of the significant considerations with using technology involving AI include:

  • Creating additional personal information: relying on AI to make decisions from, and draw connections between, data involving personal information can produce even more personal information than was originally input. This can, among other things, result in “data proxying”, where something that is not expressly known about an individual can be implied or imputed from other information known about them.
  • Inaccuracy: AI relies on the data it is given – if some information is missing or inaccessible to the AI system, the results will be skewed. In the context of personal information, relying on inaccurate information can constitute a breach of the Privacy Act.
  • Bias and discrimination: bias can be built into a system when the information put into it is inherently skewed (e.g. largely about one particular ethnic group), meaning that the outputs will perpetuate that bias. If bias and discrimination are not identified, an AI system will continue to compound inaccuracy and discrimination within its dataset. Organisations may struggle to justify decisions made using AI technology if they remain unaware of any bias or discrimination baked into the system.
  • Lack of transparency: under privacy law, individuals must know the purposes for which their personal information will be used. If organisations cannot account for how an AI system makes decisions about individuals, or how their information generated a specific outcome, individuals may struggle to trust those organisations with their information. Trust is a fundamental part of privacy protection, and once undermined it can be nearly impossible to regain.
  • Data breaches: storing large quantities of personal information in AI systems is a key vulnerability for large scale data breaches and cyber incidents. 

Managing privacy risks in the age of new technologies and artificial intelligence
 
Internationally, regulators are considering AI and the need for regulation around its use. Until such regulation reaches New Zealand, the legal obligations governing the use of AI remain unclear. What is clear, however, is the continuing need to maintain vigilance and protection around the personal information that AI technologies collect, store, use, and generate.
 
That means ensuring that your organisation has clear guidelines around the collection, use, storage, and disclosure of personal information. This should include assessing why and how your organisation is collecting personal information, training for your personnel about the importance of privacy protection, clear reporting policies when staff become aware of potential data breaches, and reasonable security safeguards to ensure the protection of personal information.
 
Additionally, when looking to implement new technologies—especially those involving AI of any kind—organisations should complete a privacy impact assessment at the earliest opportunity. You can find out more about privacy impact assessments here.
 

New technology but old privacy risks

The Office of the Privacy Commissioner has given Privacy Week 2023 the theme of privacy rights in the digital age. This theme, and the recent hype around the wave of ‘new’ AI technologies such as ChatGPT, bring new focus to the privacy risks involved with emerging technologies.

However, before worrying about what ‘new’ AI technology might be coming, now is the ideal time to consider what AI, machine learning, and other technology your organisation already uses to handle personal information. What privacy risks might your organisation already be carrying?

Duncan Cotterill’s specialist data protection and privacy lawyers can help you assess the potential impacts of AI technology in your business or organisation, and work with you to mitigate and manage negative privacy impacts.

Special thanks to Senior Associate Louisa Joblin for preparing this article. For more information, please contact a member of our Data Protection and Privacy team.

[1] OECD, “Recommendation of the Council on Artificial Intelligence” (2019), available at: OECD Legal Instruments.

Disclaimer: the content of this article is general in nature and not intended as a substitute for specific professional advice on any matter and should not be relied upon for that purpose.
