Date Published: 23 August 2022

Explainable AI (XAI) for all Stakeholders

What is Explainable AI?

Artificial Intelligence (AI) is transforming our lives by improving the services we rely upon, ranging from voice assistants and self-driving vehicles to facial recognition and intelligent home appliances. The primary benefits of AI are quick, accurate decision-making and the ability to operate 24/7. However, AI also has disadvantages, such as being a black box and potentially being unfair or biased. Explainable AI makes AI understandable and transparent. According to IBM, explainable AI refers to the processes and methods that allow users to comprehend and trust the results and output created by machine learning algorithms. It also helps identify biases and interpret complex models for each stakeholder.
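To make this concrete, one common post-hoc explanation technique measures how much each input feature contributes to a model's predictions. Below is a minimal sketch using scikit-learn's permutation importance; the dataset and model are illustrative assumptions rather than a prescribed method.

```python
# Minimal sketch: a post-hoc "global" explanation of a black-box model
# using permutation feature importance (scikit-learn). The dataset and
# model choices here are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an opaque ("black box") model.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Explain it: how much does shuffling each feature degrade accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Print the five most influential features.
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda item: item[1], reverse=True)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```

The features whose shuffling hurts accuracy the most are the ones the model relies on, giving a first human-readable view into an otherwise opaque model.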

Why is Explainable AI (XAI) important?

According to a 2020 survey by 451 Research (Voice of the Enterprise), 92% of enterprises believe that XAI is necessary, yet fewer than half have explainable AI tooling in place. This leaves enterprises exposed to significant AI risks: an AI application whose behaviour and decisions are difficult to explain may produce unfavourable outcomes that damage organisations, society and individuals. To mitigate this risk, regulators are introducing AI regulations that require explainable AI. The European Commission's Artificial Intelligence Act (EC AIA) mandates explainability for all high-risk AI applications, so companies must be able to provide stakeholders with an explanation of AI-based decisions. The benefits of explainable AI go beyond regulatory compliance and include, but are not limited to, the following:

Reduce mistakes

AI can make wrong decisions, and black box models limit the ability to detect and understand those errors. XAI helps reduce mistakes by making wrong predictions easier to spot, diagnose and correct, which matters especially in consumer-sensitive sectors such as healthcare, finance and criminal justice.

Reduce model bias

AI bias is a significant problem; recent high-profile examples include the Apple credit card, racial bias in US hospital algorithms, and gender bias in autonomous cars. Explainable AI can help reduce bias by exposing how decisions are made and by making it possible to check that no group is systematically disadvantaged, as sketched below.
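One simple check this enables is comparing a model's favourable-outcome rate across demographic groups. The group labels, decisions and interpretation below are hypothetical, and this is only one of many possible fairness measures.

```python
# Minimal sketch: compare a model's approval rate across groups to surface
# potential bias. Group labels and model decisions below are hypothetical.
import numpy as np

groups = np.array(["A", "A", "A", "B", "B", "B", "B", "A"])   # protected attribute
approved = np.array([1, 1, 0, 0, 1, 0, 0, 1])                 # model decisions

# Approval rate for each group.
rates = {g: approved[groups == g].mean() for g in np.unique(groups)}
print("Approval rate per group:", rates)

# Disparate impact ratio: values far below 1.0 suggest one group is
# systematically disadvantaged and warrants investigation.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate impact ratio: {ratio:.2f}")
```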

Informed decision-making

One of the primary roles of machine learning models is automated decision-making. XAI helps clarify each decision and identify the factors behind it, enabling AI stakeholders to understand the reasoning and, in turn, to trust both the rationale and the AI itself.
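As a hedged illustration of what such a per-decision explanation can look like, the sketch below lists per-feature contributions for a single prediction of a linear model; the feature names, data and loan scenario are hypothetical.

```python
# Minimal sketch: explain one individual decision by listing per-feature
# contributions of a linear model. Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "years_employed"]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
# Hypothetical ground truth: income and tenure help, debt hurts.
y = (1.5 * X[:, 0] - 2.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Explain a single applicant: contribution = coefficient * feature value.
applicant = X[0]
contributions = model.coef_[0] * applicant
decision = model.predict([applicant])[0]

print("Decision:", "approve" if decision == 1 else "decline")
for name, value in sorted(zip(feature_names, contributions), key=lambda t: abs(t[1]), reverse=True):
    print(f"  {name}: {value:+.2f}")
```

For more complex models, the same idea is usually delivered by dedicated attribution methods such as SHAP or LIME rather than raw coefficients.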

Uses of Explainable AI

Explainable AI can encourage transparency, holding AI developers accountable for providing meaningful explanations and mitigating risk to the organisation, users and society. The objective of XAI depends on the stakeholder: developers, domain experts, users and regulators each have different needs from explainable AI. A 2020 study of 20 organisations found that they were using XAI for narrow purposes such as feature selection and identifying false correlations, rather than exploring its full potential. Although XAI is intended to support higher-level concepts and reasoning, most organisations lacked clarity and clear objectives for it. Similarly, Liao et al. interviewed 20 UX and design practitioners working on AI products at IBM to identify the gaps between current XAI techniques and practice. The study found that the shortcomings of current XAI make it difficult to meet the expectations and organisational goals of the various stakeholders, including managing privacy risks and keeping users continuously informed.

To bridge this gap, it is crucial to integrate XAI into the AI ecosystem by involving the various AI stakeholders, from data scientists and AI project managers to risk and compliance managers and ethics leads, so that their diverse perspectives help define and meet XAI objectives. Collaborating across these stakeholders also helps deliver diverse outcomes alongside explainability. Seclea is designed as a platform that all AI stakeholders can access to monitor and track AI applications in development and deployment, helping demystify AI, increase trust in it, and ensure it does not negatively impact an organisation, individual or society.

Who are the concerned stakeholders in building Explainable AI? 

AI stakeholders can be generally categorised into the following:

AI Developers

Organisations of all sizes, from large companies to small enterprises, build AI applications for many purposes, from public services and medical applications to academic research. Developers play a significant role in quality assurance through system testing, debugging and evaluation, and in making applications more robust. They are therefore key stakeholders in building explainable AI, guided by the organisation's objectives and the regulations that apply to the organisation and its country.

Theorists

Theorists are another critical group of stakeholders; they help us understand and advance AI technology, which in turn enables XAI that can shed light on deep neural networks. Theorists also suggest how data and model behaviour can be interpreted, for example through visualisation or new kinds of cognitive assistance for understanding complex problems. In this sense, theorists can play the role of system creators for XAI.

Ethicists or AI regulators

This group includes policymakers, journalists, data scientists, lawyers, economists and politicians, all working to ensure that AI applications are fair, accountable and transparent. Their concerns go beyond the robustness of AI applications to include legal compliance and certification of AI systems. AI auditing, covering model safety, ethics and privacy, also falls to this group.

Risk Managers

AI poses risks to organisations, individuals and society, and any organisation that develops or adopts an AI application must ensure those risks are effectively managed. Because AI applications are dynamic and can evolve, AI risk management must also be dynamic, with real-time monitoring and tracking of an application's AI risk profile across its entire lifecycle, as the sketch below illustrates.
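One hedged illustration of such ongoing monitoring is a simple input-drift check that compares a production feature's distribution against the training data; the data, feature and threshold below are assumptions for demonstration, not a prescribed risk process.

```python
# Minimal sketch: detect input drift by comparing a production feature's
# distribution with the training distribution (two-sample KS test).
# Data, feature and threshold are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_income = rng.normal(loc=50_000, scale=10_000, size=5_000)
production_income = rng.normal(loc=57_000, scale=12_000, size=1_000)  # shifted upwards

statistic, p_value = ks_2samp(training_income, production_income)
if p_value < 0.01:
    print(f"Drift detected (KS statistic={statistic:.3f}); review the model's risk profile.")
else:
    print("No significant drift detected.")
```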

All Users or Consumers

The final stakeholders are the users of the AI application. They need explanations that help them understand how the system works and why it acted as it did. This group covers the company operating the application (hands-on users), its beneficiaries (end users) and everyone else involved in the process. For example, if an insurance company decides whether to provide a loan based on a machine learning model, both the company and the applicant need to understand why the model made that decision, and the company must be able to explain it to the applicant.

It is vital for any organisation using AI to have XAI whose role and objectives are clearly defined with its AI stakeholders. Explainable AI must ensure that those stakeholders fully understand AI decision-making processes, so it is crucial to understand their objectives and requirements when building transparent, explainable AI. In this way, explainable AI contributes to responsible AI, building trust through ongoing communication and the involvement of the various stakeholders.

How to implement Explainable AI for all stakeholders?

The Seclea Platform is cross-functional, with features that every AI stakeholder can access and use, building a bridge between the different AI stakeholders and the data science team. This lets the data science team focus on its core work while Seclea keeps all AI stakeholders informed about AI projects and their risks, regulatory compliance and transparency.

Seclea provides tools for AI stakeholders to ensure that an AI application's decisions, behaviour and evolution can be explained with full traceability. We are here to help: email us at hello@seclea.com or fill out the short form.