A future run by AI is promising. But at the same time, it’s unnerving.

Sophisticated AI could help make the world a better place. It could give us a chance to fight cancer, improve healthcare worldwide, or simply free us from the menial tasks that dominate our lives.

That was the main topic of discussion last month when engineers, investors, researchers, and policymakers gathered at The Joint Multi-Conference on Human-Level Artificial Intelligence.

But an undercurrent of fear ran through some of the talks, too. Some people are anxious about losing their jobs to a robot or a line of code; others fear a robot uprising. Where’s the line between fearmongering and legitimate concern?

In an effort to separate the two, Futurism asked five AI experts at the conference what they fear most about a future with cutting-edge artificial intelligence. Their responses, below, have been edited.

Ideally, by taking their concerns into account, we can steer society in a better direction: one in which we use AI for the good things, such as fighting global epidemics or expanding access to education, and less of the bad.

Q: When you think about what we can do, and what we will be able to do, with AI, what do you find most unsettling?

Kenneth Stanley, Professor At University Of Central Florida, Senior Engineering Manager And Staff Scientist At Uber AI Labs

I think that the most obvious concern is when AI is used to hurt people. There are a lot of different applications where you can imagine that happening. We have to be really careful about letting that bad side get out. [Sorting out how to keep AI responsible is] a very tricky question; it has many more dimensions than just the scientific. That means all of society does need to be involved in answering it.

On how to develop safe AI:

All technology can be used for bad, and I think AI is just another example of that. Humans have always struggled with not letting new technologies be used for nefarious purposes. I believe we can do this: we can put the right checks and balances in place to be safer.

I don’t claim to know exactly what we should do about it, but I can urge us to approach [our response to the effects of AI] carefully and slowly, and to learn as we go.

Irakli Beridze, Head Of The Center For Artificial Intelligence And Robotics At UNICRI, United Nations

I think the most dangerous thing about AI is its pace of development. It depends on how quickly it develops and how quickly we are able to adapt to it. If we lose that balance, we might get into trouble.

On terrorism, crime, and other sources of risk:

From my point of view, the dangerous applications of AI would be criminals or large terrorist organizations using it to disrupt major systems or simply to do pure harm. [Terrorists could cause harm] via digital warfare, or it could be a combination of robotics and drones with AI and other things as well that could be really dangerous.

And of course, other risks come from things like job losses. If huge numbers of people lose their jobs and we don’t find a solution, it will be extremely dangerous. Things like lethal autonomous weapons systems should be properly governed; otherwise, there’s massive potential for misuse.

On how to move forward:

But this is the duality of the technology. Certainly, my conviction is that AI is not a weapon; AI is a tool. It is a powerful tool, and this powerful tool can be used for good or bad things. Our mission is to make sure it is used for good, that the most benefit is extracted from it, and that most risks are understood and mitigated.

John Langford, Principal Researcher At Microsoft

I think we should watch out for drones. Automated drones are potentially dangerous in a lot of ways. The computation on board unmanned weapons isn’t efficient enough to do something useful right now. But in five or ten years, I can imagine a drone with sufficient onboard computation to actually be useful. You can see that drones are already being used in warfare, but they’re [still human-controlled]. There’s no reason why they couldn’t be carrying some kind of learning system and be reasonably effective. So that’s something I worry about a fair bit.

Hava Siegelmann, Microsystems Technology Office Programs Manager At DARPA

Every technology can be used for bad. I believe it’s in the hands of those who use it. I don’t think there is bad technology, but there will be bad people. It comes down to who has access to the technology and how we use it.

Tomas Mikolov, Research Scientist At Facebook AI

When there’s a lot of interest and funding around something, there are also people who abuse it. I find it unsettling that some people are selling AI even before we’ve built it, pretending to know what [problem it will solve].

These startups are also promoting as great AI examples systems that are basically over-optimizing a single metric that maybe nobody even cared about before [such as a chatbot that’s just a little better than the last version]. And after spending tens of thousands of hours of work over-optimizing a single value, some of these startups come out with big claims that they achieved something nobody could do before.

But come on, let’s be honest: many of the recent breakthroughs from these groups, which I don’t want to name, are things nobody cared about before, and they are not generating any money. They are more like magic tricks, especially the ones that treat AI as over-optimizing a single, very narrow task; there’s no way they can scale to anything beyond very simple problems.

Anyone who is even a little critical of these systems would quickly run into problems that contradict the companies’ lofty claims.