The Journey to Transparent AI

Disclaimer: This post is the author's point of view and not necessarily that of their employer, Accenture. This article was originally published on Dr. Escrig's personal LinkedIn page.

The adoption of AI will have a profound and positive impact on every aspect of our lives. In business especially, management will gain the ability to shed light on “dark data” (data that is generated and stored but adds no business value) and act on the new knowledge discovered in it. Businesses will be able to personalize experiences, customize products and services, and identify growth opportunities with a speed and precision that has never been possible before.

According to a study conducted by PwC (PricewaterhouseCoopers), 70% of business leaders believe that AI will provide significant business advantages in the future. Yet 67% of CEOs think that AI and automation will have a negative impact on stakeholders’ trust in their industry over the next five years (1).

It is difficult to evaluate, mitigate and manage the “black box” reputational and technological risks of using such new and largely untried technology. In the report Enterprise AI Promise Study: Path to Value, conducted in August 2017, almost half of the executives from 100 European organizations across banking, insurance, manufacturing, retail and government said they distrusted so-called “black box” AI, in which a system cannot explain its results (2).

Only recently, as industry interest in AI has grown, have the terms AI and Machine Learning (ML), which includes various configurations of Neural Networks and Deep Learning, come to be used interchangeably. The main reason is that ML has delivered significant results with emerging and promising business value; see several examples in (3).

However, in the 60-year history of AI, Machine Learning has represented only one of many families of technologies within AI. The pervasive use of ML as the only tool in AI has brought several systemic challenges (4), the lack of transparency being the most prominent. Consider a hypothetical example of this lack of transparency (5): imagine that you are buying your very first home. You have a good, stable job and carefully submit all your paperwork for a mortgage. Unfortunately, the bank rejects your application. You ask the banker why you were rejected and, with a blank stare, he simply shrugs: he does not know why you were rejected, only that you have been. This answer should not be accepted.

Some ML researchers have started to provide “Explainable AI” by accessing the parts of a Deep Neural Network (DNN) that are fired by particular inputs or stimuli. An excellent example for self-driving cars has been developed by NVIDIA (6): the ability to visualize the pixels of a scene that are most relevant to the DNN after training, which match what a human would have selected as the relevant aspects of the road while driving. However, with a DNN alone there is no way to see and record the “thinking” process of the self-driving car, which is what would give us real transparency, help us improve systems faster, and build trust.

“Transparency”, as opposed to “explanation”, is the ability to access the logic behind a decision made by an AI system. It is absolutely necessary when the recommendations provided by AI systems affect people, carry high business risk, or need to comply with upcoming regulations (7). For example, the EU General Data Protection Regulation (GDPR), which comes into force in May 2018, includes a right to obtain an explanation of decisions made by algorithms and a right to opt out of some algorithmic decisions altogether (8).

For real transparency, we need to return to AI’s full definition instead of relying only on ML models. That definition includes:

  1. Perception – the process of understanding what is being observed.
  2. Sensor Integration – the process of integrating interpretations when the same thing is observed by different sensors.
  3. Reasoning – the process of inferring all the knowledge possible from the information perceived; what we call “thinking” and what gives us a real understanding of a situation. If we learned without a prior effort of thinking, we would not understand why we act the way we do.
  4. Learning – when situations repeat, we no longer need to “think”; we do things “automatically”.

The AI technologies associated with Perception, Sensor Integration, and Reasoning are collectively called Knowledge Representation and Reasoning (KRR); those most used in industry today include Qualitative Representation, Graph Databases, and Ontologies.
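To make the idea concrete, here is a minimal sketch in Python of the KRR pattern: knowledge lives in explicit facts and rules, and every inferred conclusion carries the reasoning that produced it. The domain, fact names, and rules below are illustrative assumptions of ours, not part of any particular KRR product.

```python
# A toy knowledge base: facts and rules are explicit, so every conclusion
# can be traced back to the rule and premises that produced it.
# Domain and names are illustrative assumptions, not a real KRR system.

facts = {
    ("application", "has_income_ratio", "high"),
    ("application", "has_credit_history", "short"),
}

# Each rule: (premises, conclusion, human-readable justification).
rules = [
    (
        [("application", "has_credit_history", "short")],
        ("application", "risk_flag", "thin_file"),
        "a short credit history raises a thin-file risk flag",
    ),
    (
        [("application", "risk_flag", "thin_file"),
         ("application", "has_income_ratio", "high")],
        ("application", "recommendation", "manual_review"),
        "thin file plus high income ratio: route to manual review, not auto-reject",
    ),
]

def forward_chain(facts, rules):
    """Apply rules until no new fact appears, recording why each fact holds."""
    derivations = {}  # inferred fact -> justification
    changed = True
    while changed:
        changed = False
        for premises, conclusion, why in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                derivations[conclusion] = why
                changed = True
    return derivations

for fact, why in forward_chain(facts, rules).items():
    print(f"{fact}  <-  {why}")
```

Notice that the banker in the mortgage example above could read this derivation chain back to the customer; that recorded chain is exactly what a pure DNN pipeline does not produce.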

Using KRR technologies before applying Learning techniques provides real transparency and solves the challenges identified with the use of ML alone. We have coined the term “Holistic AI” for the integration of KRR with ML technologies:

Holistic AI = KRR + ML
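As a rough illustration of this formula, here is a minimal sketch, under our own assumptions, of the composition: a KRR stage turns raw data into symbolic features together with an audit trail, and an ML stage (which could be any trained model) consumes those features. All names and thresholds are hypothetical.

```python
# A minimal sketch of the "Holistic AI = KRR + ML" composition. The KRR stage
# emits symbolic features plus an audit trail; the ML stage consumes them.
# Every name and threshold here is an illustrative assumption.

def krr_stage(raw):
    """Perception + Reasoning: map raw numbers to symbols, recording why."""
    ratio = raw["income"] / raw["debt"]
    symbol = "high" if ratio > 3 else "low"
    audit = [f"income/debt = {ratio:.1f}, threshold 3 -> income_ratio={symbol}"]
    return {"income_ratio": symbol}, audit

def ml_stage(features):
    """Learning: any trained model could sit here; a lookup stands in."""
    return {"high": 0.9, "low": 0.4}[features["income_ratio"]]

features, audit = krr_stage({"income": 90_000, "debt": 20_000})
score = ml_stage(features)
print(score, audit)  # the prediction arrives with the trace of how its inputs arose
```

The design point is that the ML model only ever sees symbols whose provenance is recorded, so its output can always be traced back to the data that produced it.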

Here’s one industry example of the application of KRR. A new DNA sequencing chip based on Nanopore technology was being developed. The promise of the chip, once fully developed, was that it would be: 1) very inexpensive ($50 per chip, enough to sequence one person’s DNA); 2) very stable compared to other Nanopore technologies; and 3) able to sequence DNA in near real time.

The time-series signal obtained across different runs of the chip was unstable: peaks shared the same qualitative shape but differed in amplitude and in number of points. Manual classification of the peaks reached 30% accuracy, a promising beginning, but at that point an automatic solution with higher accuracy was needed to secure funds for continued development. ML models had been tried twice, unsuccessfully.

Our solution: Qualitative Modeling was used to capture the fundamental features of each peak while eliminating quantitative detail. Three qualitative angles and the relative lengths of the segments of each peak defined the qualitative representation of the information. A qualitative comparison of peaks was then used to perform an unsupervised classification of peaks into DNA bases, returning quantitative similarity measurements; a sketch of the idea appears below.
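The following is a simplified sketch of how such a qualitative representation and comparison might look, not the model we actually deployed: the angle bins, the length threshold, the assumption that each peak is resampled to a fixed number of vertices, and the greedy grouping rule are all illustrative choices.

```python
import math

def qualitative_angle(a, b, c):
    """Discretize the turn angle at vertex b into a qualitative label."""
    ang = abs(math.degrees(math.atan2(c[1] - b[1], c[0] - b[0])
                           - math.atan2(a[1] - b[1], a[0] - b[0])))
    ang = min(ang, 360 - ang)  # fold into [0, 180]
    if ang < 60:
        return "sharp"
    if ang < 120:
        return "moderate"
    return "shallow"

def qualitative_signature(peak):
    """peak: list of (x, y) vertices, assumed resampled to a fixed count.
    Returns qualitative angles plus relative segment lengths."""
    angles = [qualitative_angle(peak[i - 1], peak[i], peak[i + 1])
              for i in range(1, len(peak) - 1)]
    seg = [math.dist(peak[i], peak[i + 1]) for i in range(len(peak) - 1)]
    total = sum(seg)
    lengths = ["long" if s / total > 0.4 else "short" for s in seg]
    return angles, lengths

def similarity(sig_a, sig_b):
    """Quantitative similarity in [0, 1] between two qualitative signatures."""
    matches = sum(x == y for x, y in zip(sig_a[0], sig_b[0]))
    matches += sum(x == y for x, y in zip(sig_a[1], sig_b[1]))
    return matches / (len(sig_a[0]) + len(sig_a[1]))

def classify(peaks, threshold=0.75):
    """Greedy unsupervised grouping: a peak joins the first cluster whose
    representative signature it resembles, else it starts a new cluster."""
    clusters = []  # (representative signature, [peak indices])
    for i, peak in enumerate(peaks):
        sig = qualitative_signature(peak)
        for rep, members in clusters:
            if similarity(sig, rep) >= threshold:
                members.append(i)
                break
        else:
            clusters.append((sig, [i]))
    return clusters
```

With peaks resampled to five vertices, for instance, the signature consists of three qualitative angles and four relative lengths, mirroring the representation described above; because the comparison happens on symbols rather than raw amplitudes, each grouping decision can be read and checked by a person.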

The results: with only a few very unstable data samples, we were able to call DNA bases with 84% accuracy, verified against standard DNA sequencing methods. Transparency was obtained through double testing: 1) prior to implementation, the model’s soundness was demonstrated theoretically through First-Order Predicate Logic; 2) after deployment, the logic behind the model was visualized intuitively, in such a way that failures, and why they occur, were easily identified.

If this chip is going to be the DNA sequencer of the future, wouldn’t it be more trustworthy if you could see how the sequence has been generated?

There are many industry applications that do not require transparency; they provide value without any further personal or business consequence, such as preferred movies on Netflix, related products on Amazon, or clothing outfits similar to one you like.

However, there are many other industry applications where transparency will soon be mandatory; otherwise the recommendations provided by the AI will not be accepted, for example: loan approvals in financial services, regulatory governance, cybersecurity decisions, identification of criminal suspects, self-driving cars, and automated farming decisions.

If you are a company in any of these industries, you might be asking yourself: how do I get started? The simple answer is by owning your data, i.e. storing it in the KRR technology that best fits your business case. You will get insights from the first data sample, and you will be able to add, in the future, whatever ML type of AI the ROI justifies. Most importantly, you will always have the required transparency in every decision provided by the AI system, whether for your own business decisions or those of your clients. You may make mistakes in your journey, but the sooner you start, the sooner you will reap the benefits.

References

  1. “20 years inside the mind of a CEO… What’s next?”, 20th Global CEO Survey, PwC, 2017 (ceosurvey.pwc)
  2. http://www.computerweekly.com/news/450428370/AI-adoption-still-nascent-says-SAS-survey
  3. https://www.techemergence.com/ai-in-business-intelligence-applications/
  4. https://www.linkedin.com/pulse/modern-ai-intelligent-enough-teresa-escrig-phd/?trackingId=xw28o69jmE9sQTjBlXOZQg%3D%3D
  5. https://www.accenture.com/us-en/blogs/blogs-why-explainable-ai-must-central-responsible-ai
  6. Bojarski, M., et al. “Explaining How a Deep Neural Network Trained with End-to-End Learning Steers a Car”, NVIDIA, April 2017 - https://arxiv.org/pdf/1704.07911.pdf
  7. https://events.technologyreview.com/video/watch/peter-norvig-state-of-the-art-ai/
  8. http://www.techzone360.com/topics/techzone/articles/2017/01/25/429101-eus-right-explanation-harmful-restriction-artificial-intelligence.htm#


