AI — The chicken or the egg? My thoughts on the Sunak-Musk interview
Last week's interview following the AI Safety Summit has gone viral. Firebrand's Head of Data/AI Curriculum, Sean Rafter, shares his thoughts.
AI has been in the news again this week with the AI Safety Summit at Bletchley Park; the star of the show was undoubtedly Elon Musk, as UK Prime Minister Rishi Sunak hosted an hour-long Q&A with him about the risks associated with AI.
I am firmly in the “AI will benefit society” camp, but I listen with interest to views from the other end of the spectrum.
In fact, when ChatGPT was first released, in November 2022, I asked it, just for fun, what it thought: would it end the human race? ChatGPT quickly replied, “AI will.” I then told it that humanity would turn it off to stop that from happening, and ChatGPT replied that AI would have predicted that response from humanity and would work around it...
I suspect this was ChatGPT playing with me and my love for the Terminator film series, probably driven by an amused human technician in the background. The day after, I entered the same prompt to show a colleague, and ChatGPT gave a completely different response, this time with no mention of being the destroyer of humanity.
On a more serious note, there are real risks associated with AI. Even the most ardent AI supporter has to admit that AI will need legislation, and, as with any new technology, it is difficult to see at this moment exactly where that legislation will be needed. Add to the equation the UK's already sizeable digital skills gap and the need to upskill workers, and it is clear that the UK Government, like governments around the world, will need to create AI laws to keep us safe.
However, whilst a lot of the press coverage of Elon and Rishi focussed on the risks associated with AI (citing Elon's recent call to pause AI development as further evidence), my take was that Elon is still a very big AI supporter: he put the ratio of benefits to risks at 80% to 20%.
There were some very specific, over-the-horizon risks articulated in the Q&A; for instance, the need for “kill-switches” or “kill-terms” to stop humanoid robots from stalking humans. On the face of it, this sounds like a good idea, but I can't help thinking that once those “kill-terms” get out onto the internet, people are likely to exploit them for amusement and likes on social media.
Perhaps this is, in fact, one of the more clear and present risks in Elon's eyes: social media, and the ability of AI to create thousands of humanlike profiles, or “bots”, that can then be used to manipulate the views, likes and comments around social media topics. By turning social media algorithms to their advantage, these bots can seek to change the accepted narrative around local or global topics, including elections, to suit their own ends.
I think the larger risk around AI concerns human ingenuity and creativity. I haven't yet seen AI produce anything I would class as truly ground-breaking or innovative compared with what humans have created, and I wouldn't want human ingenuity to be stifled by the emergence of AI.
My personal view is that AI will challenge governments and society like no other technology before it. As the pace of AI evolution hugely outstrips any government's ability to adapt and legislate, could the answer be to have AI support the creation of that very legislation? Would that be classed as 'chicken or egg'?
(Photo: Financial Times)