It is no secret that artificial intelligence (AI) is dramatically transforming industries. However, many organisations struggle to fully adopt and integrate AI into their operations. With AI operations still developing, many businesses lack a dedicated chief AI officer (CAIO), instead leaving oversight to the CTO or CIO.
Adding a CAIO to the C-suite can be the difference between thriving and failing in this digital age.
With a CAIO's expertise, businesses can navigate the complex legal landscape and reap the benefits of AI without compromising ethics or safety.
Has AI become the zeitgeist of 2023?
The AI beast has truly taken hold and shows no sign of slowing down. Soaring interest in tools like the advanced chatbot ChatGPT has helped the benefits of AI trickle into the mainstream. Numerous AI tools have recently launched, offering greater productivity, efficiency and accuracy while professing to give the layperson back the thing they value most: their own time.
These tools, many of which build on open-source components, are taking the world by storm yet causing some controversy along the way, mainly over questionable ethics and the potential for bias.
As is almost always the case, the technology is developing faster than regulators can keep up, and this is particularly true of AI. Coupled with increased access to the huge computing power AI can use to crunch vast quantities of information extremely quickly, it is no surprise that we are seeing a watershed moment in public awareness of what AI is capable of.
What do businesses think of AI?
AI is being adopted across a breadth of industries and can provide companies with a range of benefits, including profit growth, increased efficiency, improved customer service and better talent management. It's improving logistics, making work environments safer and fuelling new business models.
However, for these benefits to be realised, organisations must bridge the gap between the rapid growth and adoption of AI and the regulatory and ethical frameworks that govern it, which is essential in a world where AI is making such big strides.
If companies don’t jump on the AI bandwagon, they’ll likely be left behind and suffer as a result. But to do so, they must navigate the complex world of AI adoption. One way to do that is to add a CAIO to the C-suite.
What are the legal risks and challenges of AI adoption?
As AI becomes increasingly complicated, it also becomes harder to identify discrimination. Take a recent example involving an advertising algorithm that resulted in different jobs being shown to men and women: more AI jobs were shown to men, while more secretarial jobs were shown to women. The challenge in this scenario was that the AI was, in a narrow sense, doing its job, because more women than men were clicking on the secretarial adverts, and vice versa.
Notwithstanding this, the AI system was perpetuating discrimination, so those responsible for the algorithm had to step in and correct it manually. This shows the importance of human oversight: understanding how the underlying technology works, and meaningfully interrogating and challenging its output. Crucially, it also demonstrates that the problem was spotted and resolved.
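As a loose illustration of the kind of check human overseers might run on a system like this, the sketch below compares how often a job category is delivered to each group and flags a skew. The data, group labels and 80% threshold are purely hypothetical assumptions for illustration, not a legal test of discrimination.

```python
# Illustrative sketch: flag skewed ad delivery by comparing how often a
# job category is shown to each group. All data and the threshold are
# hypothetical; a real audit would need far more careful methodology.
from collections import Counter

def delivery_rates(impressions):
    """impressions: list of (group, category) pairs.
    Returns {group: {category: share of that group's impressions}}."""
    totals = Counter(group for group, _ in impressions)
    counts = Counter(impressions)
    return {
        group: {
            cat: counts[(g, cat)] / totals[group]
            for (g, cat) in counts if g == group
        }
        for group in totals
    }

def flag_disparity(rates, category, threshold=0.8):
    """Flag any group that sees `category` at under `threshold` times the
    highest group's rate (loosely inspired by the four-fifths rule)."""
    per_group = {g: r.get(category, 0.0) for g, r in rates.items()}
    top = max(per_group.values())
    return {g: rate < threshold * top for g, rate in per_group.items()}
```

A check like this would not explain *why* delivery is skewed, but it gives the oversight function a concrete signal to interrogate before the system quietly entrenches a bias.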
The use of open-source AI tools is not without risks, including legal action and potential claims from licence holders. For example, building commercial products on tools whose licences restrict commercial use could leave businesses open to legal action. Profit implications are another uncertainty: the inherent nature of open source is to make software available for free, and some licences require that any re-use of the tools and code is likewise made available for free.
Lastly, using open-source tools, code or images within other applications or elsewhere in the business may inadvertently restrict how a business can protect its own IP, and this could potentially devalue an organisation.
It takes vigilance and proper compliance mechanisms to comprehensively understand and monitor what you are using and the obligations that flow from each piece of open-source tooling or software in use. This is a journey we are seeing legal and IT teams take together, and it again supports the need for a senior AI role in the business.
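As a small, hedged sketch of what the first step of such monitoring might look like in practice, the snippet below inventories the packages installed in a Python environment and their self-declared licences. This is only a starting point: licence metadata is declared by each package's author and would need verifying against the actual licence text by the legal team.

```python
# Minimal sketch: inventory installed Python packages and their declared
# licences, as a starting point for open-source compliance monitoring.
# Licence metadata is self-declared and may be missing or inaccurate,
# so a real compliance process must verify it against the licence text.
from importlib.metadata import distributions

def licence_inventory():
    """Return {package_name: declared_licence} for installed distributions."""
    inventory = {}
    for dist in distributions():
        name = dist.metadata.get("Name") or "unknown"
        licence = dist.metadata.get("License") or "not declared"
        inventory[name] = licence
    return inventory

if __name__ == "__main__":
    for name, lic in sorted(licence_inventory().items()):
        print(f"{name}: {lic}")
```

In practice, output like this would feed into a register reviewed jointly by legal and IT, exactly the cross-functional journey described above.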
How would a chief AI officer support the adoption of AI?
The UK government’s AI strategy sets out several core principles aimed at encouraging innovation and the use of AI while ensuring that the public and their fundamental rights are protected. One of these is the need for “an identified legal person” responsible for AI, similar to a data protection officer, who ensures policies and legislation are properly applied and, ultimately, takes action when things go wrong.
This role would bridge the technical function, an understanding of that technology's outputs, the legal risks and the ethics. The trend is likely to continue as the EU's new AI regulation comes into force, alongside the expected sector-specific AI guidance in the UK.
As for how discrimination is approached, not only will such instances be subject to scrutiny under the Equality Act, but we are likely to see sector-specific guidance from the likes of the FCA, the ICO and other regulatory bodies.
The chief AI officer, or a similar role, will be responsible for awareness of and adherence to this. It will not be enough for businesses to blame the technology or a third party; although they may be able to recover partial losses, the expectation will be that they completed the necessary due diligence before adopting AI.
Having a CAIO on board can be a game-changer for businesses. It can help overcome considerable challenges and ensure the right decisions are made when it comes to AI adoption.
In fact, adding a chief AI officer to an organisation can be the deciding factor between success and failure, and in today’s increasingly digitised world, many would argue that success is non-negotiable.