
AI could develop into superintelligence in just a few decades

Will it be possible to build a machine that could function as a human? Professor Nick Boström from the University of Oxford certainly thinks so. As one of the keynote speakers at the Shift Business Festival in Turku, Finland, he urged scientists, companies, and governments to focus on the “big picture”: the future of machine technology.

The well-known philosopher, Oxford professor, and director of the Governance of Artificial Intelligence Program, Nick Boström believes that machines with the full range of human capabilities could be a reality as early as 2045–2050.

“It might happen even faster. Once technology itself starts optimising AI, development could accelerate towards superintelligence,” Boström said.

Safety must come first

Boström stated that the key challenges in the future of AI are safety and governance.

As in many sci-fi thrillers, we might soon be facing machines smarter than we are. Developing AI without paying close attention to what it is actually optimising for may lead to severe, movie-like problems.

“You have to be very careful what you wish for. Once an AI becomes extremely smart, it might pursue its set goal (for example, collecting as many pine nuts as possible) and kill us all in its attempt to reach it,” Boström noted.

The hottest question in developing superintelligence is how to transfer human intelligence into a machine. Boström says that for AI to develop to that level, machines must be able to learn methods and concepts as part of the development process.

“People learn by doing, seeing, and experiencing. Machines don’t. It is also very hard to program human values into an AI, because human values are so complicated. It requires concept learning,” Boström told an intrigued audience.

For safe development and control, Boström emphasised finding ways to measure AI and its functions.

“We need scalable control to make sure that AI works as we want it to, even as it becomes more intelligent. There has been talk of a black box policy for AI,” Boström pointed out.

Changes in communication are necessary

Boström believes that there should be regular contact between governments and AI developers to make sure everything is done properly.

Once superintelligence becomes a reality, the value of AI will increase dramatically, which could compromise safety. Heavy competition to get it to market first would be a major risk factor.

“Wherever (and whoever) the developers are, they should have the opportunity to pause, wait, and test how it really works. But what if there is a lot of competition to be the first to get it out? That would increase the risks to AI safety,” he emphasised.

That said, Boström believes it is still too early for governments to start limiting the development of AI.

“Superintelligence should be used in the common interest, not controlled by a government. It should, however, be considered thoroughly. It is important to know the goals for which these superintelligent machines will be used.”

Some countries, like Canada and China, already have a plan for AI. In the short term, most countries will have to start taking AI into account in their legislation, at least in relation to self-driving cars.

Everything is still a mystery

Boström is of the opinion that the next wave of technological development will be seen shortly.

“Right now it is just hopes and fears. We don’t have enough knowledge to know exactly what is going to happen,” he acknowledged.

Looking into the deep future is difficult, as technology evolves so rapidly. How will we know when superintelligence has been achieved?

“Having a full-length discussion with an AI will be telling. Right now, a machine cannot do that without getting confused,” Boström concluded.

Professor Boström hopes that governments will start to work on their legislation to make sure superintelligence is used in the common interest.


Written by
Riitta Alakoski