Apple cofounder Steve Wozniak (aka Woz) has said that the march of AI cannot be stopped – even though he has called for a pause in development – and that we must prepare ourselves for people abusing it to create harder-to-spot scams …
Even the pioneers of artificial intelligence can’t agree on how dangerous it is, with two of the biggest names this week voicing opposing views.
Geoffrey Hinton is often referred to as “one of the godfathers of AI.” He is a key figure in the development of neural networks, has written a great many papers, and has won numerous awards for his work in the field.
He was so concerned about the dangers of current AI work that he left his role at Google so that he could speak freely about the risks […]
Jürgen Schmidhuber has been called “the father of AI” for his work in natural language processing within neural networks – the technology behind Siri and Google Translate. He has likewise written a huge number of papers and won awards for his work.
While he and Hinton don’t see eye-to-eye on many things, they do both agree that AI development can’t be stopped. Schmidhuber, however, told The Guardian that he believes the dangers are exaggerated.
Steve Wozniak has been one of those to favor a more cautious approach, recently signing an open letter calling for AI development beyond GPT-4 to be paused.
He does, though, appear to agree with both Hinton and Schmidhuber that AI’s progress cannot be stopped.
Woz: AI cannot be stopped, we must prepare for scams
Woz told the BBC that “we can’t stop the technology,” and said he was doubtful that regulation could help.
He said that one of the main dangers of generative AI is that it can help make scams harder to spot, and that we all need to be prepared for this.
Mr Wozniak says he fears the technology will be harnessed by “bad actors” […] He said: “AI is so intelligent it’s open to the bad players, the ones that want to trick you about who they are” […]
Mr Wozniak doesn’t believe AI will replace people because it lacks emotion, but he did warn that, in his view, it will make bad actors even more convincing, because programmes like ChatGPT can create text which “sounds so intelligent” […]
He sounded a note of scepticism that regulators would get it right: “I think the forces that drive for money usually win out, which is sort of sad.”
The key, he believes, is educating people about scams and phishing attacks, as these can only grow harder to spot.