Error – AI solutions are by no means perfect and carry a very real risk of producing incorrect output; use that is not monitored, regulated, or validated creates a high probability of legal claims and a general reduction in quality.
Confidential information – chatbots are produced and managed by third-party organisations, so any information entered into them, including personal data and other confidential information, is passed to that third party.
Security – the full capabilities of the AI applications in the marketplace cannot be completely understood, particularly given how quickly the sector is developing. There is a significant risk that AI may be used to gain access to our systems by those who intend to harm the organisation or otherwise act contrary to our interests.
Bias – machine learning relies upon the data fed into its systems, so any biases within the data sets used to train AI applications will be replicated and reinforced, risking increased bias in the responses they give.
Inappropriate information – responses produced by AI applications may not be vetted for appropriateness, increasing the risk that they contain inflammatory, discriminatory or otherwise inappropriate content.
Intellectual property – any intellectual property that we feed to third-party AI platforms risks entering the public domain, where it may be used by our competitors and by parties who are not paying for our services. This exposure may be direct or indirect.
Environmental impact – the rapid growth of AI infrastructure requires significant energy, water and material resources to power and cool data centres; unchecked or inefficient use increases carbon emissions and environmental strain, particularly where providers rely on non-renewable energy sources.