Currently, much of machine learning is opaque, a genuine black box. Machines may be given an extraordinarily complex task and come up with what appears to be a reasonable solution, but they are largely incapable of explaining how or why they arrived at that solution. That’s why some of the smartest AI researchers in the industry are now hunting for new ways to make machines understandable to humans.
Much of that focus is on an emerging field known as Explainable AI (XAI): in very simple terms, the ability of machines to explain their rationale, characterize the strengths and weaknesses of their decision-making process, and, most importantly, convey a sense of how they will behave in the future. XAI is going to be hugely important, with far-reaching social, legal and ethical implications.
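To make that concrete, here is a minimal sketch of one common XAI technique, the surrogate model: an interpretable model (here a shallow decision tree) is trained to mimic an opaque one so that humans can read off the rules it appears to follow. The article doesn’t prescribe any particular tool, so the Python/scikit-learn code, the synthetic data and the model choices below are illustrative assumptions, not a definitive implementation.

```python
# A minimal sketch of one XAI technique: fitting an interpretable
# "surrogate" model to approximate a black-box model's behaviour.
# Illustrative only -- data, models and depth are hypothetical choices.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in for an opaque model and its training data.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Fit a shallow tree to the black box's *predictions*, not the original
# labels, so the tree describes the black box itself.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# The tree's rules are a human-readable approximation of the
# black box's decision-making process.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(5)]))
```

The surrogate is only an approximation, but it gives researchers something they can inspect, question and debug, which is the essence of what XAI is after.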
Take the example of the autonomous vehicle, better known as the “self-driving car.” A tremendous amount of machine learning goes into navigating the highways, major roads and side streets that make up a complex transportation grid. Cars first have to understand all the “rules of the road.” Then they have to think one step ahead and anticipate the actions of others on the road. And then they have to react on the fly to external stimuli: pedestrians, stoplights, other cars, traffic signs, obstacles, and barriers.
But then there’s what some AI researchers refer to as the “goat on the road” problem. In short, how will a self-driving car fare when it encounters a situation it has never seen, or even contemplated, before: a goat on the road? Will the car treat the goat as a pedestrian, as another vehicle, or as an obstacle? How the car classifies the goat has massive real-world implications, because that classification determines whether it stops, slows down or maybe even speeds up. In one of those scenarios, the world may have one less goat.
In a best-case scenario, researchers would be able to get the driverless car to explain its actions after the fact: to walk them through the exact steps and decision-making process that led it to act the way it did. That would help the researchers “debug” the problem, and help the car learn to recognize goat-sized animals it encounters on the road in the future.
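One hedged sketch of what that could look like in practice (the article doesn’t specify an implementation; the labels, confidence scores and action policy below are hypothetical): the perception system records a structured “decision trace” alongside every classification, so engineers can later replay exactly what the car saw, how confident it was, and why it acted the way it did.

```python
# Hedged sketch: recording the evidence behind each classification so
# engineers can audit an unexpected decision -- such as an unfamiliar
# goat on the road -- after the fact. All labels and scores are made up.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionTrace:
    timestamp: str
    detected_object: str      # the label the model settled on
    class_scores: dict        # confidence for every candidate label
    chosen_action: str        # what the planner did with that label
    notes: list = field(default_factory=list)

def classify_and_act(class_scores: dict) -> DecisionTrace:
    """Pick the most likely label, choose an action, and keep the evidence."""
    label = max(class_scores, key=class_scores.get)
    action = "stop" if label in {"pedestrian", "animal"} else "proceed_with_caution"
    trace = DecisionTrace(
        timestamp=datetime.now(timezone.utc).isoformat(),
        detected_object=label,
        class_scores=class_scores,
        chosen_action=action,
    )
    if max(class_scores.values()) < 0.5:
        trace.notes.append("low confidence: flag for human review")
    return trace

# Hypothetical scores for an object the model has never seen before.
print(classify_and_act({"pedestrian": 0.31, "animal": 0.29, "obstacle": 0.25, "vehicle": 0.15}))
```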
That’s the true power of XAI: helping teams of researchers check and debug a machine over time so that they can anticipate how it will act in the future. There are plenty of business applications for this. One use case is the development of new pharmaceuticals. How can teams of medical researchers trust machines to design and synthesize the right compounds? Which studies and medical journals did those machines draw on to reach their decisions?
Or, consider the use of AI-powered machines to help Wall Street firms trade stocks and other financial instruments. What if automated trading systems start building a massive position in a stock, against everything that the market appears to be predicting? If you were the head of the equity trading team, you’d expect those machines to be able to explain how they came to that decision. Maybe they discovered a market inefficiency that nobody has noticed yet, or maybe they are getting better at anticipating the moves of other rival Wall Street firms. But when millions of dollars are potentially at stake, you want to make sure that a bunch of machines are trading your money wisely.
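As a hedged illustration of what that accountability could look like (nothing here comes from the article; the features, data and model below are made up), a trading desk could ask which inputs are actually driving a model’s signal, for example with permutation importance: shuffle each input in turn and measure how much predictive power the model loses without it.

```python
# Illustrative sketch: asking which inputs drive a trading model's signal
# using permutation importance. Features, data and model are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

feature_names = ["momentum", "order_flow", "rival_positioning", "volatility"]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(feature_names)))  # hypothetical market features
y = 0.7 * X[:, 2] + 0.2 * X[:, 0] + rng.normal(scale=0.1, size=500)  # hypothetical signal

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Shuffle each feature and measure how much predictive accuracy is lost:
# a large drop means the model leans heavily on that input.
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>20}: {score:.3f}")
```

A report like this wouldn’t prove the position is a good one, but it would tell the head of the trading desk whether the model is reacting to, say, rival positioning or to sheer noise.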
With so many different approaches to machine learning, from deep neural networks to probabilistic graphical models, it’s becoming increasingly difficult for humans to figure out how machines arrive at their conclusions. With the advent of XAI, however, we might be one step closer to making machines accountable for their actions, just as humans are.