The rise of AI: should we be scared or ecstatic?
Mon, 18th Sep 2017

Tesla and SpaceX CEO Elon Musk has repeatedly said we need to be more concerned about safety as the use of AI increases. He claims AI is a "bigger threat than North Korea", that an AI arms race is the most likely cause of World War III, and that the technology should be regulated. But given the potential benefits of artificial intelligence, is Musk being paranoid? Are robots realistically going to take over?

AI has the potential to deliver scientific breakthroughs like we have never seen or imagined. It could lead to a disease-free future. We could finally have affordable and accessible space exploration, with AI creating the blueprints and materials for a space elevator. It could help us make contact with aliens and lead to a new age of enlightenment. Even technologies that are currently the domain of sci-fi fantasy, such as reverse ageing or the transplanting of consciousness from one biological entity to another, could become a reality.

The problem is that nobody has a very clear definition of what artificial intelligence actually is. 99.9% of what people think is AI is not AI; it's just hype. Machine learning is not necessarily AI, though if it's used in certain ways, a small aspect of it can be. Amazon's recommendations, for example, are machine learning, but they're not AI.

One definition we can perhaps use is that AI starts at the point where a machine can make decisions in unpredictable contexts, decisions that can have unintended consequences. A more alarming definition is when a machine decides to take an action it hasn't been programmed to take: when it invents its own actions.

This is where Musk's concerns come in: that machines could make decisions that are beyond our control, and harmful to us.

Self-driving cars and safety

Self-driving cars are one example where AI can reduce accidents and save lives, but also be problematic. Estimates suggest driverless cars could reduce traffic fatalities by up to 90%, saving 300,000 lives per decade and US$190 billion in healthcare costs associated with accidents. The issue is that when it comes to a potential collision, a self-driving car has to "decide" what to do. This is already happening; it's not something years in the future. Mercedes, for example, prioritises the safety of passengers over the safety of pedestrians: a Mercedes self-driving car will swerve to hit a pedestrian rather than a pole or barrier that could kill the people it is transporting.

This decision is part of the ethics being built into self-driving cars. Just as human drivers do, autonomous vehicles will have to make judgement calls based on a multitude of factors. A car could be programmed to swerve into an elderly man rather than a small child. It might have to decide whether to hit a woman with a pram or two teenagers (using biometric recognition technologies to detect age, gender, even physical state of health).

These are human ethics being built into AI systems. Whose ethics are they, though? How much scope does a single coder have to program in their own ethics, such as prioritising one ethnicity or gender over another? The potential for corruption, for creating "evil machines", is immense.
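To make the point concrete, here is a purely hypothetical sketch (in Python, with invented weights and names that reflect no manufacturer's actual code) of how a handful of numbers chosen by a coder can decide who a car puts at risk:

```python
# Hypothetical, simplified collision-choice policy. The harm weights are
# invented for illustration; whoever sets them is setting the car's "ethics".

HARM_WEIGHTS = {
    "passenger": 1.0,   # a Mercedes-style policy weights passengers highest
    "pedestrian": 0.8,
    "property": 0.1,
}

def collision_cost(outcome):
    """Sum the weighted harm to everyone and everything affected by an outcome."""
    return sum(HARM_WEIGHTS[kind] * severity for kind, severity in outcome)

def choose_manoeuvre(options):
    """Pick the manoeuvre whose predicted outcome carries the lowest weighted cost."""
    return min(options, key=lambda option: collision_cost(option["outcome"]))

# Two invented scenarios: swerve into a barrier (risking the passengers)
# or continue towards a pedestrian.
options = [
    {"name": "swerve_into_barrier", "outcome": [("passenger", 0.9), ("property", 1.0)]},
    {"name": "continue_ahead", "outcome": [("pedestrian", 0.9)]},
]
print(choose_manoeuvre(options)["name"])  # prints "continue_ahead": the weights decide
```

Change one weight and the machine's "ethics" change with it; nothing in the code itself tells you whose values those weights encode.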

Ethics and bias in intelligence

We saw with the Volkswagen emissions scandal that a very small group - maybe just a handful of people - was able, out of greed, to distort the results of the emissions-measuring technology. All you need is the buy-in of a programmer who is coding a piece of firmware, and you can impose your own ethics onto a machine.
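As a purely hypothetical illustration (this is not Volkswagen's actual code; the function names and the test-detection heuristic are invented), a few lines of firmware logic are enough to make a machine behave one way under scrutiny and another way on the road:

```python
# Hypothetical firmware-style sketch of a "defeat device". The test-detection
# heuristic is invented; the point is how little code it takes.

def looks_like_emissions_test(steering_angle, speed_profile):
    # Lab test cycles vary speed while the steering wheel never moves;
    # flag that pattern as "we are being measured".
    return steering_angle == 0 and len(set(speed_profile)) > 1

def select_engine_mode(steering_angle, speed_profile):
    if looks_like_emissions_test(steering_angle, speed_profile):
        return "low_emissions_mode"   # clean, compliant behaviour under test
    return "performance_mode"         # higher emissions in normal driving
```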

Take Bitcoin. It was heralded for taking power away from thousands of bankers and handing it back to "the people". The reality is that it has concentrated power in the hands of a few computer programmers. Consider the recent Bitcoin split: if the currency was truly decentralised (and democratic), who decided that? It was a power struggle between different camps. Did all Bitcoin holders get a vote? Before, the complaint was that "bankers control everything, they're evil" - now we just have a handful of programmers with their hands on the wheel.

A concentration of power?

It's the same with AI: a concentration of power in the hands of a very small group of people, whose biases may not be transparent and whose methods may be poorly understood. That creates a liability, because our oversight is very limited. It doesn't take much to open Pandora's box.

Google, which handles over 90% of search volume, is another example of people being beholden to the ethics of a couple of people. Google has had its own ethics committee for over a decade, with the potential to influence the algorithms and filters that can "disappear" websites and kill businesses overnight. There is a moral bias behind your search results.

More recently, Google formed an AI ethics committee whose operations remain extremely mysterious and non-transparent. Several media organisations have made requests for information but have been denied.

AI and the Internet of Things

The Internet of Things could magnify the potential threat and amplify the opportunity, as everything around us will be making decisions that we have limited or no control over. We don't know how machines may eventually interpret some of the input they get. If we program survival into a machine, can we be certain that it will continue to prioritise human safety above its own survival?

Take a fridge that malfunctions and determines that it needs to deactivate to prevent internal overheating or a fire. The worst that can happen is spoiled food. But swap that fridge for a hospital life-support machine. It may have to choose between staying on, with a 50% chance that it will overheat and cause a fire that could endanger hundreds of staff and patients, and switching itself off, with a 100% chance that the patient whose life it supports will die.
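Expressed as a toy expected-harm calculation (the probabilities come from the scenario above; the casualty figures and the utilitarian rule are invented for illustration), the machine's dilemma looks like this:

```python
# Toy expected-harm comparison for the hypothetical life-support machine.
# All figures are illustrative; the "right" answer depends on coded-in values.

p_fire = 0.5                  # chance of overheating and fire if it stays on
casualties_if_fire = 200      # stand-in for "hundreds of staff and patients"
casualties_if_off = 1         # the patient the machine is keeping alive

expected_harm_stay_on = p_fire * casualties_if_fire       # 100.0
expected_harm_switch_off = 1.0 * casualties_if_off        # 1.0

# A purely utilitarian rule switches the machine off and lets the patient die.
decision = "switch_off" if expected_harm_switch_off < expected_harm_stay_on else "stay_on"
print(decision)
```

Whether that is the "right" decision depends entirely on the values someone has already written into the comparison.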

Power is increasingly flowing into the hands of even fewer people. Malicious intent and private bias could easily be built into systems, and if those systems are autonomous and eventually "intelligent", they represent an immense threat to humanity. We all need to be concerned about the concentration of power and control in the digital world, and what it means for AI.

But none of this should negate the benefits. It's possible that AI could keep us safer by preventing incompetent humans from being able to activate nuclear weapons. In fact, it may take away our need for arms altogether.

Article by Simran Gambhir, Ganemo Group founder