Image credit: ThomasAFink via Shutterstock
Figures in the UK AI community have had a mixed reaction to the strongly worded warning that AI posed a genuine risk of human “extinction”.
The joint statement, organised by the Centre for AI Safety and signed by prominent AI experts including OpenAI’s Sam Altman, was published yesterday.
The statement read: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.
The warning, which was also signed by the chief executive of Google DeepMind, comes as tools like ChatGPT have exploded into the mainstream and are disrupting the world of work.
Reaction to the stirring statement has been divided, with some suggesting the risks are being overblown, and others in full agreement that AI has the potential to cause serious harm to humanity.
Most in the industry have accepted that certain risks exist. However, some feel the language used in the statement is unhelpful.
“It’s necessary to balance the narrative,” said Daniel Hulme, CEO at Satalia. “AI is not merely a harbinger of a dystopian world. It is a tool – like fire or the wheel – that, in the hands of a mindful humanity, can drive us to unprecedented heights.”
Hulme added that it was a waste of energy to debate whether AI is a potential cause of extinction, suggesting that the community should instead channel its efforts into creating a framework for responsible AI.
“The future is unknown, but also our gift to create, so let’s make the conversation around AI empowering and inspiring, not mired in fear.”
Dr Chibeza Agley, co-founder and CEO at OBRIZUM, similarly feels the idea that “we’re on the precipice of extinction” is “premature”.
Agley said: “AI has the potential to truly change society for the better, delivering productivity in ways never imagined. However, as with any new technology or innovation, there is the potential that it could be used for nefarious means, so we all need to be mindful of the risks.”
AI should be ‘co-pilot not autopilot’
Not everyone felt the comments were overblown, however.
“The language is fully justified, given the scale of potential risk of rapid AI acceleration,” said Marc Warner, CEO of AI firm Faculty and member of the prime minister’s AI council.
“There are no previous circumstances where a vastly more intelligent thing has been constrained by something much, much less intelligent – and that is what humans will face with AI.”
While Warner believes AI is an incredible technology that can support human endeavours, he warned that AI should be a “co-pilot and not an autopilot. It should be used to supplement critical decision making and help us plot the consequences of possible actions – all whilst keeping humans in charge”.
Dr Leslie Kanthan, co-founder and CEO of TurinTech, said the use of the word extinction was fully justified, as its aim was to “underscore the potentially grave consequences of insufficiently cautious approaches to AI development”.
Kanthan said that even though the word choice was dramatic, it carried the weight and urgency needed to prompt “proactive measures to avert catastrophic outcomes”.
Kanthan added: “It is crucial to emphasise that the intention is not to instil unnecessary fear or impede valid discussions, but rather to cultivate a sense of urgency and prioritise the meticulous evaluation of AI safety.”