Artificial Intelligence Is Your Next Sustainable Competitive Advantage. Let’s Start With The Black Box.


For decades, companies have been searching for sustainable competitive advantages, but few can claim to have found one. Now a big opportunity lies ahead, and the race has only just begun. Artificial intelligence and deep learning are powering the next level of digital transformation. Those who can transform their organization beyond simply adopting technologies, and instead change the work culture and processes to be digital-first, will be better positioned. It has become apparent that AI may very well be part of the solution. Think about it this way: the company is the algorithm; the algorithm runs the enterprise; AI powers the super algorithms.

Before jumping into the game and building complex models, I suggest you start by asking how the application of deep learning can shape your business strategy. This is normally a topic I need about 100 slides to explain, but I am simplifying it here. The relationship between a corporation and its algorithms is getting very tangled. Previously the two were completely separate: systems existed to support decision-making by management. Today, deep networks make decisions even when management is not at work, but that doesn't mean an algorithm can replace them either. Is this the era of autonomous management?

Traditionally, deep networks are regarded as black boxes; no one can figure out exactly what is going on inside. The question is whether these black boxes are truly uninterpretable in a way that, say, logistic regression is not. The practical difference between statistical models like logistic regression and deep networks is the lack of interpretability and transparency in the latter. And the challenge of deep networks misclassifying adversarially chosen examples remains largely unresolved.
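
To make "adversarially chosen examples" concrete, here is a minimal sketch of the fast gradient sign method, one standard way such examples are produced. The names `model`, `x`, and `y` are placeholders for an already-trained PyTorch classifier, an input batch, and its true labels; none of them come from the original post.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return a copy of x nudged by epsilon in the direction that increases the loss.

    `model`, `x`, and `y` are assumed placeholders: a trained classifier,
    an input batch, and its true labels.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # A tiny, sign-only step is often enough to flip the model's prediction,
    # even though the perturbed input looks unchanged to a human.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```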

Take, for example, a logistic regression classifier trained for lung cancer diagnosis. For a well-defined problem like this, we can assume that any reasonable model predicts the diagnosis with some degree of accuracy. Yet even among models with known performance characteristics, some are considered interpretable while others are not. As such, interpretability, not accuracy, is the key to deciding whether deep learning is an effective method to apply in a specific business or problem context. In some contexts, such as insurance risk prediction, fraud protection, and medical diagnosis, it can be both useful and interpretable. However, in situations that require managing very complex, unknown behavior, we still need a few more years to play with these models.
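
As a rough sketch of why a model like this counts as interpretable: with logistic regression, each coefficient tells you how a feature pushes the prediction toward or away from a diagnosis. The post talks about lung cancer; scikit-learn's built-in breast cancer dataset is used below purely as a stand-in, since no dataset is specified in the original.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in diagnostic dataset (the original post mentions lung cancer).
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))

# Interpretability: inspect the largest coefficients to see which features
# drive the prediction, something a deep network does not offer directly.
coefs = clf.named_steps["logisticregression"].coef_[0]
for name, w in sorted(zip(X.columns, coefs), key=lambda t: -abs(t[1]))[:5]:
    print(f"{name}: {w:+.2f}")
```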

Now consider a model with a million features, where the evidence for any one prediction is spread across ten thousand of them. Scenarios like this come up frequently, for example when predicting urban driving conditions or equity behavior in financial markets, or when recreating music in the style of an 18th-century composer. An explanation that oversimplifies down to a few prominent features will not faithfully capture such a model's behavior. So whatever massive data you are planning to use to power your AI strategy, first consider the interpretability, and not only the accuracy, of the model before assuming the black box can handle everything. Let's start from here.
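
One way to put that interpretability check into practice is to ask how many features actually carry the model's evidence. The sketch below assumes you already have a per-feature importance vector (for example, absolute linear coefficients or permutation importances); `importances` is a hypothetical input, not something from the original post.

```python
import numpy as np

def features_for_mass(importances, mass=0.9):
    """How many features carry `mass` of the total importance?

    `importances` is an assumed per-feature importance vector
    (e.g. absolute coefficients or permutation importances).
    """
    w = np.sort(np.abs(np.asarray(importances, dtype=float)))[::-1]
    cumulative = np.cumsum(w) / w.sum()
    return int(np.searchsorted(cumulative, mass) + 1)

# If a handful of features carry most of the mass, a simple explanation may
# be faithful; if it takes thousands, a few prominent features will mislead.
```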

Read the original post here.