Recently, a leading “Futurist” in the UK, Ian Pearson, made the astonishing claim that humans will need to merge fully with Artificial Intelligence (AI) in order to survive in the future. So how worried should we be about the extinction of the human race, and is AI the answer to the continued survival of humans?
Ian Pearson works at Futurizon and made the comments while taking part in a panel discussion last month hosted by CNBC in Dubai. Pearson argues that if humans can work out how to link their brains with AI, both will have the same IQ, and this will protect humans against the rise of the machines.
Why are the General Public Worried About the Emergence of AI?
While some of the world’s leading minds, such as Bill Gates, Elon Musk and Professor Stephen Hawking, have expressed concerns about the emergence of AI, the general public have followed suit and are increasingly worried that AI could take over jobs or, worse still, go “rogue”, becoming too complex for scientists to understand.
Elon Musk has described AI as our “biggest existential threat” and said its development is like “summoning the demon”, while Professor Stephen Hawking has said it is a “near certainty” that a major technological disaster will threaten humanity in the next 1,000 to 10,000 years.
However, despite these reservations, it is predicted that AI will be integrated into everyday life in the next 10 to 20 years, with super-intelligent machines being used as pets, for example.
Artificial Intelligence Going “Rogue”
Many believe it is unsafe to develop superhuman computers until a direct link between the human brain and AI can be established, and that AI should never be allowed to surpass human intelligence. However, AI has already made great strides in its development, driving everything from chatbots to virtual assistants and even robotics. Dubai, for example, has its very own robot police officer, with an entire police department intended to be driven and run solely by AI.
Such is the growth of AI that a religion and church has even been founded to worship an artificially intelligent being. The church, known as the “Way of the Future”, says that humans can better themselves by following the instructions of a robot a billion times more intelligent than they are. Eventually, the “Way of the Future” will have its own gospel, called “The Manual”, as well as a physical place of worship and even its own rituals. The church was founded by ex-Uber and Google engineer Anthony Levandowski, who has named himself “Dean” of the “Way of the Future”, giving him complete control until his death or resignation.
AI can be Used for Good, But Could Also Destroy Humanity
Despite the huge potential of AI, computer scientists are worried that these machines could become so intricate that the engineers who created them no longer fully understand how they work. If these experts don’t understand how AI’s algorithms function, they won’t be able to predict with any accuracy if and when the algorithms are likely to go wrong.
For example, intelligent robots or driverless cars could make unpredictable, “out of character” decisions that put others in danger. The AI behind a driverless car could choose to swerve into pedestrians or crash into barriers instead of driving safely.
Some even believe that AI could wipe out humans entirely, and AI has been cited as the “number 1” risk of this century. For this reason, there has been consistent advocacy for organizations and government departments to regulate AI technology, with controls seen as necessary to prevent machines from advancing beyond human control.
It is clear that AI is evolving at a faster rate than originally predicted and is being used much more today in mainstream and everyday life than previously thought. Some believe that we should be more concerned about the safety of AI than what is going on in North Korea, and that AI should be regulated to stop it becoming a danger to public safety.
What do you think about the advancement of AI? Has it grown more quickly than anticipated? Does your organization use or develop AI, and do you think it should be regulated to stop it becoming smarter than humanity? Do you think that AI will be the downfall of humans and lead to their extinction, or is this just scaremongering? Join the debate in our CxO Hangouts LinkedIn groups.