Should we fear advances in AI?

Photography by Josh Rawlinson

Whether it’s Skynet or HAL 9000, we have a creeping worry about rogue AI, and for good reason. Science fiction has arguably paved the way for much scientific progress by imagining things which only later become possible. And when it is not serving as exploration, it serves as a much-needed warning.

The Campaign Against Killer Robots sounds like the subject of an April Fool’s news headline, but it is in fact incredibly serious, and not without good reason. Drones are commonplace in warfare: a weapon that allows its operator to stay away from danger. These remote-control killers are only one step away from something still more terrible: a weapon with complete autonomy. Stopping that step is the aim of the Campaign Against Killer Robots. Autonomous weapons may not have made the news yet, but they are known to be in development by the militaries of several nations. It seems a fairly clear premise that if weapons can’t be trusted in the hands of humans, they certainly shouldn’t be trusted alone. Government representatives regularly meet in Geneva to discuss which weapons should be made illegal with respect to human rights; they previously banned blinding lasers in anticipation of their development. The Campaign argues that autonomous machinery would put battlefield decisions (and potentially even policing) in the hands of machines. To remove human control from life-or-death decisions would be to remove those decisions from moral judgement or justifiability.

In a more domestic setting, AI is also creeping into the home. Cortana, Siri and Alexa are names which now all carry very different connotations. Here the concern is not so much the machine’s autonomy as its effect on privacy. Cortana, Siri and Alexa are the eerie disembodied mascots of three enormous tech giants who want your data. Which of Microsoft, Apple and Amazon is the most ethical with data sharing is a different subject entirely. Nonetheless, each now has an ear in the home of anyone who buys one. Perhaps not an ear that always listens, but certainly one that records what you say to it. We know this because recording speech patterns and learning to recognise them better (and eventually replicate them) is necessary for the development of these domestic AIs. When thousands of Amazon Echo Dots are shipped out, each one is capable of gathering data to feed the algorithms, improve machine learning, and make for a more fluent Alexa.

Some may have no issue with automating more aspects of our lives. Admittedly, automation may make life easier, but it comes at the price of our data. I personally support the use of big data for public research, not for private corporate gain. This is why I report every Facebook advert I see as inappropriate, and why I will remain highly sceptical of these personal assistants for the foreseeable future.

AI has a long way to go, not just in its inherent effectiveness, but more so in the way we present and approach it, and in the psychological effects of doing so. Research suggests that exposing children to these technologies could present a particular danger. One article on the subject notably dismisses reports of ten-year-olds abusing Siri with: “Children yell at their toys all the time.” There is a logical flaw here: I’d argue that smartphones are tools, not toys.

As if the normalisation of AI in the family isn’t disconcerting enough, we should stop any normalisation of AI in warfare, where it can do real damage, before it genuinely does become too late.

Mentioned today in Headcandy:

Campaign Against Killer Robots

https://www.stopkillerrobots.org

“Siri, You’re Messing Up a Generation of Children”

Judith Shulevitz

https://newrepublic.com/article/117242/siris-psychological-effects-children
