
Do humans make computers smarter?

Posted on Nov 10, 2016 in Blog, News

Humans still outperform computers at many tasks, but as AI advances, will our intervention help them or hobble them? It’s complicated.

Source: Do humans make computers smarter?

 

As machine learning makes computers smarter than us in some important ways, does adding a human to the mix make the overall system smarter? Does human plus machine always beat the machine by itself?

The question is easy when we think about using computers to do, say, long division. Would it really help to have a human hovering over the machine reminding it to carry the one? The issue is becoming less clear, and more important, as autonomous cars start to roam our streets.

Siri, you can drive my car

Many wary citizens assume that for safety’s sake an autonomous car ought to have a steering wheel and brakes that a human can use to override the car’s computer in an emergency. They assume – correctly for now – that humans are better drivers: so far, autonomous cars have had more accidents, though mostly minor ones caused by human-driven cars. But I’m willing to bet that as the percentage of driverless cars increases, and as they get smarter, the accident rate for cars without human overrides will be significantly lower than for cars with them.


After all, autonomous cars have a 360-degree view of their surroundings, while humans are lucky to have half that. Autonomous cars react at the speed of light; humans react at the speed of neuro-chemicals, contradictory impulses, and second thoughts. Humans often make decisions that preserve their own lives above all others, while autonomous cars, especially once they’re networked, can make decisions that minimize the sum total of bad consequences. (Maybe. Mercedes has announced that its autonomous cars will save passengers over pedestrians.)

In short, why would we think that cars would be safer if we put a self-interested, fear-driven, lethargic, poorly informed animal in charge?

A game of Go

But take a case where reaction time doesn’t matter, and where machines have access to the same information as humans. For example, imagine a computer playing a game of Go against a human. Surely adding a highly skilled player to the computer’s side — or, put anthropocentrically, providing a computer to assist a highly skilled human — would only make the computer better.

Actually, no. AlphaGo, Google DeepMind’s system that beat the third-ranked human player, makes its moves based on its analysis of 30 million moves from 160,000 games, processed through multiple layers of artificial neural networks that implement a type of machine learning called deep learning.

AlphaGo’s analysis assigns weights to potential moves and calculates the one most likely to lead to victory. The network of weighted moves is so large and complex that a human being simply could not comprehend the data and their relations, or predict their outcome.
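To make that concrete, here is a minimal, purely illustrative Python sketch. It is not AlphaGo’s actual code: the real system combines deep policy and value networks with Monte Carlo tree search, while here an invented table of scores stands in for the learned value network.

    # Purely illustrative sketch, not AlphaGo's code: a made-up table of
    # scores stands in for the learned value network that estimates how
    # likely each candidate move is to lead to a win.
    def choose_move(estimated_win_probability, legal_moves):
        # Play whichever legal move the evaluator rates most likely to win.
        return max(legal_moves, key=lambda m: estimated_win_probability[m])

    # Toy scores for three candidate moves (board coordinates) in some position.
    scores = {(3, 4): 0.48, (16, 16): 0.55, (4, 3): 0.51}
    print(choose_move(scores, list(scores)))  # -> (16, 16)

A human can follow this toy version easily; what defeats human comprehension in the real system is the scale and depth of the evaluator behind those scores.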

AlphaGo (Photo: Google)

The process is far more complex than this, of course, and includes algorithms to winnow searches and to learn from successful projected behaviors. Another caveat: Recent news from MIT suggests we may be getting better at enabling neural nets to explain themselves.

More: Should your self-driving car kill you to save a school bus full of kids?

Still, imagine that we gave AlphaGo a highly ranked human partner and had that team play against an unassisted human. AlphaGo comes up with a move. Its human partner thinks it’s crazy. AlphaGo literally cannot explain why it disagrees, for the explanation is that vast network of weighted possibilities that surpasses the capacity of the human brain.

But maybe good old human intuition is better than the cold analysis of a machine. Maybe we should let the human’s judgment override the machine’s calculations.

Maybe, but nah. In the situation we’ve described, the machine wants to make one move, and the human wants to make another. Whose move is better? For any particular move, we can’t know, but we could set up some trials of AlphaGo playing with and without a human partner. We could then see which configuration wins more games.
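As a rough sketch of what such trials would measure (the outcome lists below are placeholders, not real data), one would simply compare win rates across the two configurations:

    # Illustrative sketch only: compare win rates of hypothetical trial games
    # played by AlphaGo alone versus AlphaGo with a human partner who can
    # override its moves. The outcome lists are placeholders, not real data.
    def win_rate(results):
        # `results` is a list of booleans: True if that side won the game.
        return sum(results) / len(results)

    machine_alone = [True, True, False, True, True]         # placeholder outcomes
    machine_plus_human = [True, False, False, True, False]  # placeholder outcomes

    print(f"machine alone:      {win_rate(machine_alone):.2f}")
    print(f"machine plus human: {win_rate(machine_plus_human):.2f}")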

The proof is in the results

But we don’t even need to do that to get our answer. When a human partner disagrees with AlphaGo’s recommendation, the human is in effect playing against AlphaGo: Each is coming up with its own moves. So far, evidence suggests that when humans do that, they usually lose to the computer.


Now, of course there are situations where humans plus machines are likely to do better than machines on their own, at least for the foreseeable future. A machine might get good at recommending which greeting card to send to a coworker, but the human will still need to make the judgment about whether the recommended card is too snarky, too informal, or overly saccharine. Likewise, we may like getting recommendations from Amazon about the next book to read, but we are going to continue to want to be given a selection, rather than having Amazon automatically purchase for us the book it predicts we’ll like most.

We are also a big cultural leap away from letting computers arrange our marriages, even though they may well be better at it than we are, since our 40 to 50 percent divorce rate is evidence that we suck at it.

In AI we trust

As we get used to the ability of deep learning to come to conclusions more reliable than the ones our human brains come up with, the fields we preserve for sovereign human judgment will narrow. After all, the computer may well know more about our coworker than we do, and thus will correctly steer us away from the card with the adorable cats because one of our coworker’s cats just died, or because, well, the neural network may not be able to tell us why. And if we find we always enjoy Amazon’s top recommendations, we might find it reasonable to stop looking at its second choices, much less at its explanation of its choices for us.

After all, we don’t ask our calculators to show us their work.
