"Should we be concerned?" That, of course, depends on just what one's concern actually is. For example, AI is already used to write many online articles for several big-name publishers, and the output has often been said to come pretty danged close to a human writer's. Should we be concerned? If so, to what degree? As it turns out, AI writing articles only works for certain kinds of articles - say, recaps of baseball games. There are definitive stats, a generally accepted list of terms and colloquialisms, and so on. Feed all of that into a system and it doesn't take much, really, to kick out articles that appear to have been written by humans. Again, where is the concern, and how much?
We must assume (lest we be danged fools) that Asimov's rules for robots (and by extension AI) will not apply in the future. There will undoubtedly be those whose sole purpose is to create AI that ends up doing harm to humans, whether that harm is intended or not. Take, for example, morality tests. AI will not have morals, so the decisions it makes will be based on a set of rules either programmed into it or developed on its own as it "learned" about humans and about "living" on this planet. Which track in the trolley problem would the robot choose? Which one would you choose? Are the answers indistinguishable? Is that the part that scares people?
I think that is what scares people: AI will face the same scenarios as humans and will make the same choices humans make - or could make. We are guided by multiple sets of rules - societal, internal, parental, spiritual (for some), and so on. So, what if we design AI such that it does not adhere to a fixed set of programmed black-and-white rules, but rather lives in the same gray world we do? That, I would venture, is what scares most people. The extension of that, then, is whether humans are even "needed" on the planet.
My argument with colleagues has often been that AI cannot be as irrational as humans. We do things based not only on sets of rules running around in our heads, but also on emotion or seemingly random desires. I could get up right now, go to the kitchen, and grab a bag of chips and a soft drink, knowing my family will be eating dinner within 30 minutes. It makes no sense and could actually be detrimental in a variety of ways (harming my health, upsetting my wife because I ate and am no longer hungry, making myself sick by eating the chips and then a full meal, etc.). Will AI ever act so irrationally? Would AI ever get to the point of suicidal thoughts and actions? What about last-moment changes in our behavior? Everything a particular person has thought and felt has led them to the brink of taking their own life, but at the last moment, they opt not to. No logical reasoning, no list of pros and cons - they just don't do it. Will AI ever be in a similar situation? And if so, what actions would the AI take?
Should we be concerned? That depends.