Should we be scared of AI?

Dean Lewis

Absolutely, but not for the reasons people talk about on TV. Maybe I should deal with that part first: when most of us hear "AI", we think of some humanoid robot that is smarter than we are. Yes, that's coming, but I'll argue there's little to fear from it.

Why? Almost all AI is very, very limited. In American slang, it's a one-trick pony. It will draw a picture or drive your car, but it's dumb beyond that one job. Your toaster is never going to chat with you about the subtleties of the problems in Gaza. I'm sure of that because it would cost the toaster company money to add the programming, memory and chips needed to have that conversation. That would seriously drive up the price of your toaster, and it ain't gonna happen.


And the fear of thousands of smart androids becoming self-aware and overthrowing us is also not on the menu. These things are crazy expensive and will remain so for some time to come. Someday… possibly, but it's nothing we will need to be concerned about for the next decade or two. A few rules are probably in order, though.

Here is where the trouble is going to come from: social media. In the next couple of years – and I don't mean ten – you will see a video of some President or PM declaring war or announcing martial law. It will be his or her voice and face. There may even be a news anchor, complete with familiar set and desk, filling in the details. And it will all be fake. It will be AI-generated but will look and sound real, complete with scenes of soldiers shooting women and children. Cities will burn and society will crumble… except none of it will have happened.

Would some groups do it today? You betcha. Except the technology is not quite good enough yet. This would be an excellent way to spread panic and the kind of chaos that makes an entire nation fly apart, because it plays on our fears. There are several countries that would be all too happy to flood Facebook with these videos. This is the AI you should fear, and I guarantee you it's coming… soon.

Should we be scared of AI?

Our Rusuk Blog writer Sergey

Back in the 80s, as a teenager, I discovered the world of Isaac Asimov. Still a Soviet boy, I was impressed by his sci-fi and the ideas behind it. I loved his 'The End of Eternity' with its weird and unusual concepts of time. Much later, in the 2000s, I discovered him as a popular science writer through his fascinating book 'A Choice of Catastrophes,' but that is an entirely different story.

Back to the topic. It was Asimov who first, in mass media and contemporary fiction, laid out the principles of designing and developing AI. In his Robot series, he developed the Three Laws of Robotics. I admired his books The Caves of Steel and The Naked Sun, both futuristic and down-to-earth.

Let’s check out these laws, as presented in the fictional ‘Handbook of Robotics, 56th Edition, 2058 A.D.’:

The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

The Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

The Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Mr. Asimov, who, apart from being a sci-fi writer, was also a biochemistry professor at Boston University, was the pioneer in thinking about the harm AI could potentially bring to humanity.


Swap ‘robot’ for ‘AI,’ and we get a working model for dealing with AI’s potential menace to all of us.

I am not a specialist who could imprint those laws into an AI, but I am sure that, technically, it can be done, though I believe it is a very challenging task.
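To make the idea concrete, here is a toy sketch, not a real safety system, of how the Three Laws could be encoded as an ordered filter that vets an AI's proposed actions. Everything here (the `Action` fields, the `permitted` function) is hypothetical, invented purely for illustration:

```python
# A toy illustration of Asimov's Three Laws as an ordered action filter.
# All names and fields are hypothetical, invented for this sketch.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool = False       # would the action injure a human?
    neglects_human: bool = False    # would inaction here let a human come to harm?
    human_ordered: bool = False     # was the action ordered by a human?
    self_destructive: bool = False  # would the action destroy the robot?

def permitted(action: Action) -> bool:
    """Apply the Three Laws in strict priority order."""
    # First Law: never harm a human, by action or by inaction.
    if action.harms_human or action.neglects_human:
        return False
    # Second Law: obey human orders (any order reaching this point
    # is already consistent with the First Law).
    if action.human_ordered:
        return True
    # Third Law: protect the robot's own existence.
    return not action.self_destructive

print(permitted(Action("fetch coffee", human_ordered=True)))                  # True
print(permitted(Action("push human", harms_human=True, human_ordered=True)))  # False
```

The priority ordering is the whole point: a human order cannot override the First Law, just as in Asimov's formulation. The genuinely hard part, of course, is the one this sketch waves away with boolean flags: deciding whether a real-world action "harms a human" at all.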

Anyway, my answer to the question in the title is that we don’t have to be scared of AI, but we do have to design and handle it responsibly. AI should be humanity’s partner, not its nemesis.

P.S. Back in 1994, as a foreign exchange student at Baylor University in Waco, TX, I bought a paperback of Asimov’s The Caves of Steel so I could read it in English. I still keep it.

Should we be scared of AI?

Roger Bara

The chances are you have already used artificial intelligence today, maybe on your iPhone or one of the “smart” devices in your home. And no, there is little cause for alarm there.

But that’s not what is worrying some of the most influential computer scientists around the world today. Their concerns revolve around the future misuse, or under-protection, of systems that grow exponentially cleverer, resulting in models that could evade human control, replicate themselves, and make decisions at the expense of human interests. In other words, the possible annihilation of humanity.

Without proper safeguards, AI could easily enhance terrorist capabilities: propaganda, radicalisation, weapons development and attack planning. For ordinary peeps, the likes of you and me, there’s the risk of increased fraud, impersonation, ransomware, currency theft, data harvesting, and voice cloning. That’s just for starters…

Even if AI remains in scientific hands, the future looks bleak. According to James Barrat’s terrifying book “Our Final Invention”, we are on the brink of the Intelligence Explosion, and we humans will remain in control for only so long. Right now, Barrat suggests, superintelligence is racing from the future to meet us.


Do you remember J. Robert Oppenheimer, often described as the father of the atomic bomb for his role in creating it? He spent decades afterwards campaigning against its use! Well, that is happening right now with AI, with many leading creators now calling for governments to first of all understand and then regulate what has been invented… 

So, what is my country doing about this? Prime Minister Rishi Sunak wants to present Britain as a world leader on AI. That’s a bit of a laugh at this time. There is more regulation for a shop selling sandwiches, cakes and biscuits than there is for anything concerning AI.

I am writing this just after Sunak has admitted that the UK would not “rush to regulate” AI, because it was “hard to regulate something you do not fully understand.” This is a man at the head of a party that cannot even grasp how difficult the majority of its people are finding just keeping their heads above water in a huge economic crisis. What hope that they will ever understand even the basics of AI?

Without really intelligent people (and where are we going to find politicians of that calibre?) introducing essential and radical safeguards, we are all condemned. Have a good day; who knows how many we have left.
