Does AI Pose Any Threats?
Artificial Intelligence technology poses a lot of threats and potential problems. I write about it a lot on this site, but here’s a short summary of some of the issues.
Military
Nations want their militaries commanded by the best commanders available. You’d be nuts not to.
But what if all your military strategy war games consistently show that an AI is the best player?
Back in WW2 the Japanese were able to cripple much of the American Pacific Fleet in a single two-hour surprise attack.
Now two hours might wipe out the whole of the USA. Two days would certainly do it.
Start with cyber warfare corrupting command, control and communications.
Take out the satellites. Without communication, observation and GPS satellites, the US capacity for offshore retaliation degrades severely, and immediately.
Follow up with nuclear, biological and nanotech strikes and anything else in the arsenal. And with any luck you can present a fait accompli to the UN with the US asking for peace.
Human beings can’t react to complex threats and vast quantities of information quickly enough, so you put an AI in control of your military.
And its orders are to protect itself and the country as best as possible.
But just how well can you test such a system?
How much can go wrong?
Consider how HAL, the well-designed AI in 2001: A Space Odyssey, becomes a homicidal threat simply by struggling to comply with orders that were outside its operational design parameters. (You need to read the book to understand this.)
Information
Information is power. Ask Rupert Murdoch.
And Donald Trump, Adolf Hitler, Stalin, Kim Jong … etc. all understand that controlling the information flows, the media, the indoctrination / education systems and so on provides great power and silences opposition.
Meanwhile, the boundaries of human knowledge are advancing so fast that even human experts struggle to keep up to date within their own field of expertise.
But an AI like IBM’s Watson can handle it. It can read and assimilate thousands of articles across many different knowledge disciplines.
After all, AIs have already demonstrated that they can deal with ‘big data’ better than humans in many instances.
So what happens when we hand the task of assimilating and representing information and knowledge over to expert AI researchers?
AIs can be distributed. They can dwell in the internet, without any fixed location. They can facilitate the internet’s functioning, encrypt and decrypt communications as they flow, trace connections and so on. The advantages to telcos, the NSA, and every other country and company of having digital intelligences on the internet are obvious. But given the regularity with which agencies and companies are being hacked these days, who ends up with the information? The AI itself?
What regulatory authorities are there on the internet?
Labour Market
The impact of technology on the labour market has always had deep sociological implications.
For a while machines replaced hard work and brute force.
Then they replaced skilled fingers with the spinning jenny etc.
Then they replaced switchboard operators.
But AI will have a major impact on fields which require intelligence.
The middle class, who have rested on their intellectual skills, from doctors and lawyers to teachers and researchers, may now find that AI is eating specifically into their jobs.
Which means the guy with the master’s degree is trying to get a job caring in the aged-care nursing home, which doesn’t pay well.
And everyone else is competing with him for the low-paid jobs.
Finance
We already have AI trading programmes that can make trading decisions faster (and better?) than humans. And you want the best looking after your retirement nest egg, right?
So what happens when a billion humans hand over their wealth to be managed by expert AI programmes?
Do we even control our own wealth anymore?
If we put AI in charge of our wealth, we’re perilously close to putting it in charge of the economy.
And how far off is that from putting it in charge of the politicians / politics?
Legal & Ethical
Our legal systems are supposed to enable our nation, society and economy to work. They have a strong bias in favour of the haves (who can afford lawyers and can exploit loopholes even where the rule of law applies and the judicial system itself isn’t corrupt, which in many countries it is). And they pay particular attention to personal rights, responsibilities and profits.
Our legal system commonly recognises that human beings (these days, even slaves) have personal rights and responsibilities.
And it recognises artificial persons, such as companies, as having rights and responsibilities.
But what about AIs?
Saudi Arabia has recognised an AI robot, Sophia, as a citizen. Thin end of the wedge. At the time, even female humans weren’t allowed to drive in Saudi Arabia.
But what are the legal rights and responsibilities of an AI?
If a driverless car tries to cross the border between Mexico and the USA, and border security finds drugs in the car, who is responsible? Who gets arrested?
Is an AI something that is owned? Does it have rights? Does it have responsibilities?
AIs make decisions. Even a chess-playing AI decides what move to make.
But it doesn’t have Free Will. We know this for certain.
So should it be held legally responsible for the decisions it makes?
But then, how do we know that Humans have Free Will? Is it rational to hold humans legally responsible for the decisions we make?
And what if an AI becomes smart enough to make determinations about taxation, immigration etc.?
An AI is now giving rulings on human futures and lives.
Perhaps we can make an AI that gives good decisions in court. Perhaps we can have an AI judge.
It could even give online judgements in an online court. That’s a whole lot cheaper than getting everyone together at a given place and time. The cost of courts for deciding basic things, from driving offences to bail applications, is horrendous. Let’s use AI to get the cost down.
So now we have an AI judge making determinations about human lives?
Social Ethics
And if the Nazis tell the AI to assist in processing Jews through the death chambers, does the AI just do its job efficiently, or does it consider the ethics of the situation?
Whence does an AI derive its ethics?
Do we regard AIs as the new slave class, or do we become the new slave class to the AIs?
Can we even define AIs as a class? From Siri to Watson, from a smartphone to a DVD player, from a distributed AI like SkyNet to a smart-bot or a domestic robot appliance, how can we define an AI for legal purposes and determine its rights, responsibilities and privileges?
If an AI writes something, who owns the copyright?
We see a lot of software produced by the open-source movement. So what about an AI produced by open source? Who ‘owns’ it? Who’s responsible for the errors it makes? Where does the quality assurance lie?
These questions have been at the heart of the social impact of digital intermediaries like Uber and Airbnb. But AI can take them to a whole new level.
This is a topic for a book, but it will have to do for now as a blog item.
Perhaps I’ve asked more questions than I’ve given answers.
Some of these questions are already being asked in many different forums including Quora.
But good answers are hard to find.
And we don’t have long to find them.