Engineering · Artificial intelligence

Should we be concerned with recent advancements in artificial intelligence?

Eleanor Carman Incoming BLP Sales Associate at LinkedIn

May 26th, 2015

While self-aware robots are not a current threat, many big names in tech are worried about the possible introduction of these robots into society. Artificial intelligence is still in the very early stages of development, but is this something we should even be dedicating resources to? I'm not quite sure what my personal opinion is on the matter, but I'm interested to see what you all think of this debate.

Max Goff

May 26th, 2015

Actually, genetic algorithms and genetic programming are but two disciplines in AI that give rise to evolving 'intelligence,' the inner workings of which can quickly become inscrutable. This is to say that, yes, machines can do a lot more than they are programmed to do, and, sorry, you cannot really know or fully understand how they arrived at their conclusions, even though those conclusions may be superior to human cognitive processing.
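To make that concrete, here is a minimal sketch of a genetic algorithm in Python; the fitness function and parameters are invented for illustration. The point is that the final genome is discovered by selection, crossover, and mutation rather than designed, which is exactly why the 'reasoning' behind an evolved solution can be hard to reconstruct:

```python
import random

GENOME_LEN = 32

def fitness(genome):
    # Reward alternating bits; the GA discovers this pattern without
    # ever being told what the pattern is.
    return sum(1 for i in range(GENOME_LEN - 1) if genome[i] != genome[i + 1])

def evolve(pop_size=100, generations=200, mutation_rate=0.02):
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        # Crossover and mutation refill the population.
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, GENOME_LEN)
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < mutation_rate else g
                     for g in child]
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best), best)
```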

The question is whether machines can be built with general-purpose, human-competitive intelligence. The Turing Test is an excellent rubric in that regard, and to date it has not been conquered.

In my view, the advent of general-purpose, human-competitive AI is not in itself an existential threat to humanity. But it might be more of an indirect threat. Clearly, if AI is able to replace humans in all vocations and do the work cheaper and better, then no human job is safe. How we cope with the social and economic consequences will determine whether we survive -- not because of a Matrix-like war with the machines, but more like a new serfdom imposed by those who own versus those who do not. The have/have-not pattern can easily be magnified and rigidly enforced if we do not figure out a better model. AI just might be our downfall in that regard, but probably not because of evil machine consciousness.

Kevin Goldstein IT

May 26th, 2015

I feel it's the same as any large paradigm-shifting development the human race has been through in its history (examples: Plato argued against literacy, people during the Industrial Revolution feared there would be no more manufacturing jobs, and opponents of the television said it would turn the human race into mindless drones). The only consistent thread across these shifts and changes is that (a) yes, they changed the way we viewed and interacted with the world, and (b) it was never how we expected or predicted.

So, is it something we should be concerned about and fear? No. Is it something that will impact how we interact with the world around us? Yes, but probably not in the way we expect it to.

Lastly, large-scale changes are usually only dangerous to those who are unwilling to learn and adapt.


May 26th, 2015

"Should we be concerned?" That, of course, depends on just what one's concern actually is. For example, AI is already used to write many online articles for several big name publishers. The output has often been said to come pretty danged close to a human writer's. Should we be concerned? If so, to what degree? As it turns out, AI writing articles only works for certain kinds of articles, say baseball games. There are definitive stats and generally accepted list of terms and colloquialisms, etc. So, feed all of that into a system and it doesn't take much, really, to kick out articles that appear to have been written by humans. Again, where/how much is the concern?

We must assume (lest we be danged fools) that Asimov's rules for robots (and by extension AI) will not apply in the future. There will undoubtedly be those who create AI that does harm to humans, whether by design or not. Take, for example, morality tests. AI will not have morals, so the decisions it makes will be based either on a set of rules programmed into it or on a system it developed on its own as it "learned" about humans and "living" on this planet. Which track in the trolley problem would the robot choose? Which one would you choose? Are the choices indistinguishable? Is that the part that scares people?
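As a concrete illustration of the "programmed rules" case, here is a minimal Python sketch of a hard-coded utilitarian policy for the trolley scenario; the rule and the numbers are illustrative assumptions, not a proposal for how such decisions should actually be made:

```python
def choose_track(people_on_main: int, people_on_side: int) -> str:
    """Hard-coded rule: divert only if doing so harms fewer people."""
    return "side" if people_on_side < people_on_main else "main"

# Classic framing: five on the main track, one on the side track.
print(choose_track(people_on_main=5, people_on_side=1))  # -> "side"
```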

I think that is what scares people: AI will face the same scenarios as humans and will make the same choices humans make, or could make. We are guided by multiple sets of rules: societal, internal, parental, spiritual (for some), and so on. So, what if we design AI such that it does not adhere to a certain set of programmed black-and-white rules, but rather lives in the same gray world we do? That is what scares most people, I would venture. The extension of that, then, is whether or not humans are even "needed" on the planet.

My argument with colleagues has often been that AI cannot be as irrational as humans. We do things based not only on sets of rules running around in our heads, but also on emotion or seemingly random desires. I could get up right now, go to the kitchen, and grab a bag of chips and a soft drink, knowing my family will be eating dinner within 30 minutes. It makes no sense and would actually be detrimental in a variety of possible ways (my health, upsetting my wife because I ate and am not hungry, making myself sick if I ate the chips and then a full meal, etc.). Will AI ever act so irrationally? Would AI ever get to the point of suicidal thoughts and actions? What about last-moment changes in our behavior? Everything a particular person thinks and feels may have led them toward taking their own life, but at the last moment, they opt not to. No logical reasoning, no pros-and-cons list. They just don't do it. Will AI ever be in a similar situation? And if so, what actions would it take?

Should we be concerned? That depends.

Lane Campbell Lifelong Entrepreneur

May 26th, 2015

If Elon Musk and Bill Gates are speaking publicly about it, then yes, we should be concerned.

Karl Schulmeisters CTO ClearRoadmap

May 26th, 2015

"AI' is hardly in  "the very early stages of development".  So called "big data" is all about AI.  There is a profound shift in social economy coming.. and coming very fast.  Which is why Musk and Gates are talking about it.  Here are some relevant texts:

Who Owns the Future

The Second Machine Age

Race Against the Machine

Craig Walmsley MD @ Progenit. Founder @ rtobjects. Locke Scholar.

May 27th, 2015


Because we really don't have a proper grip on what "intelligence" amounts to, we have only a very vague notion of what it might take to create it.

So I wrote an article on roughly where we are now, and what it might take to actually create "intelligence". 

Reuven Granot Corporate Strategic and Scientific Officer at Perlis Ltd

May 26th, 2015

As our expertise is in the field of robotics and AI, I would like to state that I have nothing to contradict the different opinions displayed in Science by John Markoff on May 25, 2015 (the article linked above). However, in my opinion the question is whether, in principle, robots may or may not be manufactured with intelligence superior to what we as humans have. At least theoretically, robots designed using Multi-Agent Systems are an assembly of simple parts, and their intelligence may develop following a procedure similar to the one evolution follows, just much faster. We are still very far from robots that can be controlled safely enough to replace combat soldiers. We are even farther from robots that may control their own development so as to frighten human civilization. But it is not impossible…
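As a rough illustration of the multi-agent idea, here is a minimal Python sketch in which simple agents each follow a trivial local rule and a coordinated global result emerges that no single agent computes; the averaging rule is an assumed toy consensus protocol, not a model of any real robot:

```python
import random

# Twenty simple agents, each holding a random initial estimate.
agents = [random.uniform(0.0, 100.0) for _ in range(20)]

for _ in range(50):
    snapshot = list(agents)
    for i in range(len(agents)):
        # Local rule: move halfway toward the average of two random peers.
        a, b = random.sample(range(len(snapshot)), 2)
        agents[i] += 0.5 * ((snapshot[a] + snapshot[b]) / 2 - agents[i])

# The group converges on a shared value no individual agent was given.
print(min(agents), max(agents))
```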

Brendan Gowing CTO at CENTURY Tech

May 27th, 2015

No. It's media hype.

"I'd like to think that this is rock bottom. Journalists can't possibly be any
more clueless, or callously traffic-baiting, when it comes to robots and AI."

Ming Tsui

May 26th, 2015

We should be concerned about human evil, since robots are just machines created by humans. That human evil element is the one we should be concerned with, along with how to deal with those who want to create evil robots that will harm humanity.

Tony Dobaj CTO at Bizzz

May 29th, 2015

It's interesting to me that virtually all explorations of this topic in popular media (including the very well done Ex Machina) equate the human condition with intelligence. This is a very bad assumption. To wit: Homo sapiens is the result of Darwinian evolution, which by definition has the survival instinct at its core; it and we are inextricably linked, and no amount of social evolution will change that fundamental truth. The emergent behaviors of an AI, on the other hand, are not sullied by the survival imperative. It is for this reason that I have exactly the opposite outlook: AI will be our salvation, not our destruction. As for the extinction of the human race, no intelligence is going to care whether the vehicle of its sentience is engineered or evolved. While it's impossible to predict what happens then, it makes perfect sense that a being not hampered by genetic idiosyncrasies would gravitate away from, not toward, the collective insanity that we as humans witness every single day. The singularity is our destiny, and I can't wait.