The New Zealand (NZ) Government has released Responsible Artificial Intelligence (AI) Guidance for Businesses (Guidance), alongside the first AI Strategy for NZ (Strategy). In this article, we summarise and provide our view on the Guidance. Our article on the Strategy is available here.
The Strategy and Guidance are in line with NZ’s proportionate and risk-based approach to AI, and draw on the OECD’s values-based AI Principles. The Guidance is a voluntary resource to help businesses use and develop AI systems in a compliant and responsible way.
Here’s what you need to know about the Guidance
Understanding your ‘why’ for AI
As a starting point, businesses should set a clear purpose for their intended AI use, and ensure that this purpose is lawful and does not infringe legal rights (including privacy and intellectual property (IP) rights), human rights or commercial rights.
Good business foundations for responsible AI
Businesses need to understand all legal obligations relevant to AI use or development – the Guidance does not remove the need to do so. Relevant laws include but are not limited to:
- Commerce Act 1986: Ensure AI systems do not engage in practices that restrict competition or mislead consumers (e.g. algorithmic pricing collusion, false reviews).
- Companies Act 1993: Uphold Directors’ duties, including due care and diligence, and legal and ethical obligations.
- Consumer Guarantees Act 1993: Ensure obligations to consumers are upheld.
- Fair Trading Act 1986: Avoid misleading or deceptive conduct related to outputs or use of AI tools.
- Human Rights Act 1993 and Bill of Rights Act 1990: Avoid discrimination on sex, race, and other protected grounds in protected areas, and protect civil and political rights.
- Privacy Act 2020 (Privacy Act), and applicable codes of practice: Ensure that the handling of personal information (which may be included as part of AI inputs or outputs) complies with privacy obligations.
- Intellectual property law, including the Designs Act 1953 and Copyright Act 1994: Understand ownership, protection and licensing of AI outputs, datasets and underlying algorithms.
Other laws (including sector-specific laws), regulations and standards may apply, as well as those of international jurisdictions where a business operates. Businesses also need to consider and comply with contractual arrangements with third party data or system suppliers.
The Guidance recommends the following additional actions:
- Assemble a team with diverse expertise across security, data and/or AI governance, technology, legal and compliance, privacy, AI education, and stakeholder communications.
- Identify and manage risks at all stages of the AI life cycle. The Guidance provides a range of risk management resources, including the Massachusetts Institute of Technology (MIT) AI Risk Repository.
- Maintain records to show where and how decisions have been made. This will be essential for answering user or customer enquiries and/or any future audits.
AI system specific considerations
Understand the training data
Businesses should understand the data used to train the AI system, the context in which it was collected, and what uses are permitted, to assess whether the system is right for its intended use and fits with their values. We summarise key recommendations below:
- Training data: Document how the training data was sourced, any modifications made and known biases or accuracy levels. Training data should be of good quality – accurate and ‘clean’, complete, lawfully obtained or accessed, structured appropriately (including to support transparency and explainability), and both relevant and representative for its context of use.
- Accuracy and reliability: AI systems can amplify inaccuracies, unreliability, unfairness, discrimination and/or harm to individuals. Be aware of potential biases when training data contains information about people, and/or when using an AI system to make decisions about people.
- Open data: Consider what open datasets are available and relevant for developing or fine-tuning an AI model. This includes datasets that do not contain personal information but hold valuable geospatial, health and other data made available free of charge.
Dealing with sensitive, proprietary, or personal information
AI can exacerbate privacy risks. Improper data collection and processing can lead to privacy and confidentiality breaches, violations of IP rights, or data sovereignty and human rights impacts.
Key points from the Guidance are set out below:
- Consider data attributes: If any data includes proprietary information, confidential business information (such as access keys, source code, or billing details) or personal information, special care needs to be taken to manage the risks of accidental disclosure.
- Personal information: If personal information is used or handled:
  - Appoint a privacy officer responsible for compliance with the Privacy Act (amongst other obligations).
  - Document the legal basis on which training data is collected, used, and/or stored (including consent mechanisms for customer data collection).
  - Conduct a Privacy Impact Assessment (PIA) at the design stage. Privacy-by-design helps ensure privacy protection is built into information systems, business processes, products and services from the start. Ensure that the Information Privacy Principles (IPPs) are applied at all stages of the AI lifecycle. Refer to the Privacy Commissioner’s guidance for more information (available here and here).
  - Be mindful that even use of personal information that is already ‘publicly available’ may be considered unethical or illegal, and/or may damage reputation, in some contexts.
Understanding ownership and IP rights
Training data and models can be sourced from proprietary datasets, open data platforms, or public content (including via web scraping practices). Each source may come with specific licensing terms that describe who may access them, how they can be used, and/or how they must be labelled or attributed.
Open-source datasets may be protected by IP rights and have certain conditions that need to be met in order to copy and/or use those datasets (e.g. Creative Commons licences), including attribution requirements, or restrictions on derivations or commercial use. Similarly, terms of use restrictions in publicly available or proprietary datasets might prohibit web scraping or use for AI training.
When deploying AI systems, businesses should consider the following to ensure they understand and comply with IP and licensing arrangements:
- Licensing assessment: Before using or developing an AI system, conduct a licensing assessment to ensure appropriate permissions or authorisations are obtained to copy and/or use data for training or other purposes.
- Consider the source: Consider the source of any prompts and data being input into an AI system, and whether doing so breaches any IP or other rights.
- Permission options: There are options to ensure training data is ethically and lawfully obtained, including with the permission of the owners of the datasets used – for example, through ‘collective licensing schemes’. Appendix Three of the Guidance refers to new and emerging options for businesses to obtain permissions to use third-party proprietary datasets, including copyright works, in training, refining, or prompting AI models.
- Terms and conditions: Check the terms and conditions of Generative AI models before use, and/or establish written agreements with AI providers to clarify output ownership.
- Ownership: Outputs generated by Generative AI may lack commercial protection or ownership (and claiming IP ownership raises novel issues relating to the level of ‘human authorship’ and establishing originality).
- Infringement: There is always a risk that model outputs may (intentionally or inadvertently) be substantially similar to existing copyright works, and therefore be subject to plagiarism and copyright infringement claims.
The World Intellectual Property Organisation has specific Generative AI Guidance (available here). Copyright Licensing New Zealand intends to release a collective licensing scheme later in 2025 to partner AI developers with NZ rightsholders, and we will be keeping a close eye on this progress.
Handling Māori data
Producing, using or handling Māori data in your organisation may warrant special considerations. Guidance from the Centre for Data Ethics and Innovation provides further detail on Māori data and AI.
New Zealand’s IP laws include provisions for the protection of mātauranga Māori (traditional knowledge), in relation to patents and trade marks. These provisions help prevent the registration of trade marks or granting of patents that would be considered offensive by Māori or contrary to Māori values. Te Puni Kōkiri (the Ministry of Māori Development) is working to address regulatory barriers to the commercialisation of mātauranga Māori.
Supporting AI capabilities – additional resources
The Guidance refers to a range of resources relating to the procurement of AI systems, including an AI Procurement Checklist. Generative AI can introduce additional cybersecurity risks. Refer to joint guidance from the NZ National Cyber Security Centre and other cyber security authorities on a secure-by-design approach to support physical and digital resilience.
The Guidance emphasises the need to grow AI literacy within businesses. Businesses should consider foundational responsible AI training for all staff, and tailored training for those developing or governing AI. The OECD’s Tools and Metrics for Trustworthy AI may assist with building understanding and capability.
Transparency about when and how AI is used in your business is important. The AI Transparency Checklist included in the Guidance is intended to guide businesses to think through how they can share information about their use of AI.
Why it is important for businesses to engage with this Guidance
The Strategy and Guidance send a strong signal from the NZ Government to the business community to invest in AI adoption. The Guidance contains helpful information and provides references to other national and international guidance and resources. However, the task for businesses will be translating recommendations into action, choosing the mitigations that are appropriate for their context, and understanding how our existing laws (which were not created with AI in mind) apply to novel and specific AI risks and issues.
Businesses that have international operations or customers will also need to be mindful of complying with other legal frameworks, including AI-specific laws (as relevant).
As noted in the Strategy document, SMEs currently lag larger businesses in AI adoption. The Guidance recognises that size is not always indicative of opportunity when it comes to responsible AI adoption, and that the Ministry of Business, Innovation and Employment is considering what support would be most useful for small businesses to adopt AI successfully and responsibly. We look forward to seeing the progress on this front.
Businesses play a vital role in connecting with each other and sharing responsible AI approaches and solutions, and industry sectors are encouraged to consider what other resources would support AI implementation in key industries. We look forward to continuing to engage with our clients and the business community to support the adoption of AI in a responsible and legally compliant way.
Disclaimer: The content of this article is general in nature and not intended as a substitute for specific professional advice on any matter and should not be relied upon for that purpose.