AI Use & Ethics Policy

Introduction

At Here, we recognise that Artificial Intelligence (AI) tools are transforming the way the creative industry operates. They have the potential to automate tasks, improve decision-making, and provide valuable insights. We are committed to embracing technology so that we can continue to innovate and improve our organisation.

However, the use of AI tools also presents new challenges in terms of information security and data protection. This policy is a guide for employees on how to use AI tools safely and securely, and in particular how to safeguard against sharing potentially sensitive company and client information.

Purpose

The purpose of this policy is to ensure that all employees use AI tools in a secure, responsible and confidential manner. The policy outlines the requirements that employees must follow when using AI tools, including the evaluation of security risks and the protection of confidential data.

Policy Statement

The use of AI tools can pose risks to our operations and clients. Therefore, we are committed to protecting the confidentiality, integrity, and availability of all company and client data. This policy requires all employees to use AI tools in a manner consistent with our security and ethical best practices.

This policy will be reviewed and updated on a regular basis to ensure that it remains current and effective. Any revisions to the policy will be communicated to all employees.

Risks Presented by AI

Error – AI solutions are not perfect and present a very real risk of error; use that is not monitored, regulated or validated carries a high probability of legal claims and a general reduction in quality.

Confidential information – chatbots are produced and managed by third-party organisations and, as such, any information entered into them, including personal data and other confidential information, is passed to that third party.

Security – the full capabilities of AI applications in the marketplace cannot be fully understood, particularly given how quickly the sector is developing. There is a significant risk that AI may be used to access our systems by those who intend to cause the company harm or otherwise act contrary to our interests.

Bias – machine learning relies upon the information fed to its systems. This means that biases within the data sets fed to AI applications will be replicated and reinforced, risking increased bias in the responses they give.

Inappropriate information – the responses produced by AI applications may not be vetted for appropriateness, increasing the risk that they contain inflammatory, discriminatory or otherwise inappropriate content.

Intellectual property – any intellectual property which we feed to third-party AI platforms risks placing that intellectual property into the public domain for the use of our competitors and of parties who are not paying for our services. This exposure may be direct, for example through the platform's outputs, or indirect, for example through our inputs being used to train future models.

Environmental impact – the rapid growth of AI infrastructure requires significant energy, water and material resources to power and cool data centres; unchecked or inefficient use increases carbon emissions and environmental strain, particularly where providers rely on non-renewable energy sources.

Security Best Practices

All employees are expected to adhere to the following security best practices when using AI tools:

Protection of confidential data

Employees must not upload or share any data that is confidential, proprietary, or protected by regulation without prior approval from the appropriate Governance Team member. This includes data related to clients, employees, or external partners.

Access control

Employees must not grant anyone outside the company access to AI tools without prior approval from the appropriate department or manager, and must complete any subsequent processes required to meet security compliance requirements. This includes sharing login credentials or other sensitive information with third parties.

Use of reputable AI tools

Employees should use only reputable AI tools and be cautious when using tools developed by individuals or companies without established reputations. Any AI tool used by employees must meet our security and data protection standards (see Next Steps).

Compliance with security policies

Employees must apply the same security best practices we use for all company and customer data. This includes using strong passwords, keeping software up to date, and following our GDPR Policy.

Data privacy

Employees must exercise discretion when sharing information publicly and disguise personal information to ensure anonymity. Before uploading or sharing any data with AI tools, employees must ask themselves: “Would I be comfortable sharing this information outside of the company? Would we be comfortable with this information being made public?” You must also refer to the obligations set out above regarding the protection of confidential information, as well as the Protecting the Interests of Here provisions within your contract.

Ethical Best Practices

All employees are expected to adhere to the following ethical best practices when using AI tools:

Fairness and inclusion

Employees must use AI in ways that align with our ED&I commitments and avoid generating, amplifying or validating discriminatory, exclusionary or harmful content. Outputs should be reviewed critically to identify and mitigate bias, ensuring decisions and creative work reflect diverse perspectives and uphold our Company Values.

Transparency and accountability

Employees must remain accountable for all work produced with the support of AI. Appropriate disclosure should be made to line managers and/or Client Services where AI has been used to materially inform client or internal deliverables. Final responsibility for accuracy, integrity and suitability remains with the employee.

Human oversight and judgement

AI must support, not replace, professional expertise and critical thinking. Employees are expected to apply informed judgement, fact-check outputs, and ensure that creative and strategic decisions remain human-led and aligned with our standards of quality and craft.

Responsible and sustainable usage

Employees should use AI tools efficiently and intentionally, avoiding unnecessary or excessive use that contributes to avoidable environmental impact. Consideration should be given to the energy and resource demands associated with AI technologies, aligning usage with our broader sustainability commitments.

Next Steps

We have collectively shared tools, experiments and practical insights across teams, helping us to better understand where AI could genuinely optimise, inform and support our processes and creative practice.

The Operations Director is now reviewing the security features, terms of service, privacy policy, reputation and any third-party services used by our preferred AI tools to ensure they align fully with our security, data protection and contractual obligations. A further update will follow once this review is complete, alongside any approved guidance on platform usage.

Here Limited is committed to ensuring that the use of AI tools is safe and secure for all employees and clients, as well as the organisation itself. We believe that by following the guidelines outlined in this policy, we can maximise the benefits of AI tools while minimising the potential risks associated with their use.
