Date Published: 25 August 2022

Who is liable if the Machine goes wrong?

When AI goes wrong!

Artificial Intelligence (AI) is becoming part of our daily lives through smart home appliances, self-driving cars, navigation devices, smartwatches and more. The technology now stretches across sectors such as defence, technology, household, entertainment, education, environment, farming and finance. Yet AI has also harmed individuals, organisations and society. AI is the future, but who is accountable when things go wrong, as inevitably they do? Most organisations struggle to build explainable and fair AI applications, let alone transparent and accountable ones. Transparency and accountability, however, are essential to AI's future success and to compliance with emerging global regulations.

AI has made companies realise that the technology is vulnerable to "garbage in, garbage out". In self-learning systems, the "garbage" is biased data: feed it into the machine and you get biased behaviour and biased decisions. AI bias can discriminate against groups on the basis of gender, race, sex or colour. Here are a few examples of how bias affects AI (a short sketch after these examples illustrates the mechanism):

Financial sector: In 2019, the algorithm behind the Apple Card was found to be biased against women, offering them lower credit limits than men with comparable financial profiles. The model used to screen applicants did not properly account for individual applicants' circumstances and produced disparate results by gender.

Health sector: In 2019, a popular algorithm used by many large U.S. health care systems to screen patients for high-risk care management programmes was found to be biased against Black patients. The algorithm used health care spending as a proxy for need, and because less was historically spent on Black patients than on similarly ill white patients, it systematically underestimated their needs. Researchers found that 17.7% of patients flagged for additional care were Black; without the bias, that figure would have risen to 46.5%.

Educational sector: In 2020, during the COVID-19 pandemic, the UK's exam regulator used a grade standardisation algorithm to award end-of-school qualification results for an entire cohort of students, basing the grades on schools' historical performance data rather than on teacher-assessed grades supplied by schools. The algorithm produced poorer results for students from disadvantaged backgrounds and lower-performing state schools.

Justice and criminal sector: AI tools and systems have generated discriminatory criminal justice outcomes and have wrongly flagged innocent people; one published report found error rates at a level of about 81%.
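
The mechanism behind these failures is straightforward to demonstrate. Below is a minimal, hedged sketch in Python, using entirely synthetic data and the open-source scikit-learn library, of how a model trained on historically biased decisions simply replays that bias; every name and number in it is illustrative and not drawn from the cases above.

```python
# "Garbage in, garbage out": a model trained on biased historical
# decisions reproduces the bias. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

income = rng.normal(50, 15, n)           # annual income in £k; the only legitimate signal
group = rng.integers(0, 2, n)            # two demographic groups

# Historical approvals were biased: group 1 needed a higher income.
threshold = np.where(group == 1, 60, 45)
approved = (income > threshold).astype(int)

# Here the group is a feature to make the effect visible; in practice
# a proxy correlated with it (e.g. postcode) leaks the same bias.
X = np.column_stack([income, group])
preds = LogisticRegression().fit(X, approved).predict(X)

for g in (0, 1):
    print(f"predicted approval rate, group {g}: {preds[group == g].mean():.2%}")
# The learned model simply replays the historical disparity.
```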

Why is AI Accountability Crucial?

The question is: who is responsible when machines make wrong decisions? If AI is not made accountable, it could threaten livelihoods and even human life. So how do we make AI liable? Accountability assigns responsibility, and liability, for any harm caused by the deployment of AI tools. According to the Organisation for Economic Co-operation and Development (OECD), AI 'accountability' refers to the expectation that organisations or individuals will ensure the proper functioning, throughout their lifecycle, of the AI systems that they design, develop, operate or deploy, in accordance with their roles and applicable regulatory frameworks, and will demonstrate this through their actions and decision-making processes.

According to IBM, AI accountability means that everyone involved in scoping, designing, deploying and managing an AI system is answerable for anything that goes wrong with it. Under the GDPR, the accountability principle is tied to transparency and makes those authorised to hold and process personal data responsible for how they handle it. A similar principle applies in the EC Artificial Intelligence Act (EC AIA). Accountability, then, can be ensured by making AI transparent, explainable and auditable. We define these concepts as follows:

AI Transparency: Collect, collate, analyse and present all activities related to an AI project. Complete provenance is kept and made available to AI project managers and product owners so they can monitor what is happening and ensure responsible AI practices are followed.

Explainable AI: Explain how an AI model evolved to make a particular decision or exhibit a particular behaviour. Rather than focusing only on the final state of the model, Seclea provides deeper insight into why the model reached that state: a valuable perspective for understanding model evolution and its behaviour.

AI Auditability: Seclea provides tools to investigate an AI project, its models and its decisions from many perspectives. Our provenance information records the human activities that influenced a model's behaviour, and explainable AI traceability shows how that human influence shaped model learning, leading to its final behaviour and decisions (preferences).
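
As a concrete illustration of the explainability idea, here is a short, hedged sketch using the open-source `shap` library rather than Seclea's own tooling; the model, data and feature names are invented for the example.

```python
# Per-decision explainability: which features drove one prediction?
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))            # columns: income, debt, tenure
y = (X[:, 0] - X[:, 1] + rng.normal(0, 0.5, 500) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Explain a single decision rather than the model as a whole.
explainer = shap.Explainer(model.predict, X[:100])   # background sample
explanation = explainer(X[:1])

for name, contrib in zip(["income", "debt", "tenure"], explanation.values[0]):
    print(f"{name:>7}: {contrib:+.3f}")
# Positive contributions pushed the decision one way, negative the other.
```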

Moreover, organisations can promote AI accountability for individuals, teams and society by ensuring the following:

Regulate AI in an organisation through governance

Governance significantly promotes AI accountability by helping manage risk, demonstrating ethical values and ensuring compliance. Clear goals and objectives for AI systems, clearly defined roles and responsibilities, and a multidisciplinary workforce capable of managing those systems all help an organisation stay accountable. It is also essential to document the technical specifications of each AI system, its compliance status, and which stakeholders have access to information about its design and operation.
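
One practical governance step is keeping that documentation machine-readable so it can be reviewed and versioned alongside the model. The sketch below, in the spirit of a "model card", uses invented field names and values purely for illustration.

```python
# A minimal machine-readable record of an AI system's specification.
from dataclasses import dataclass, asdict
import json

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    accountable_owner: str           # a named role, not just a team
    training_data_sources: list
    known_limitations: list
    applicable_regulations: list

record = AISystemRecord(
    name="credit-limit-model-v3",
    purpose="Recommend initial credit limits for approved applicants",
    accountable_owner="Head of Retail Credit Risk",
    training_data_sources=["applications_2018_2021 (internal)"],
    known_limitations=["Sparse data for applicants under 21"],
    applicable_regulations=["GDPR", "EC AIA (proposed)"],
)

print(json.dumps(asdict(record), indent=2))   # version this with the model
```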

Ensure data and models are robust

Understanding the data plays a vital role in reducing bias. Documenting and checking the source, quality and collection process of the data is essential, and its reliability and representativeness need to be examined, including the potential for bias. Accountability for data also covers data security and privacy.
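
A basic representativeness check can be automated before any training run. The sketch below is illustrative only, assuming a tabular dataset with a demographic column named `group` and a historical `outcome` column.

```python
# Check each group's share of the data and its historical outcome rate.
import pandas as pd

df = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "A", "B", "A"],
    "outcome": [1,   1,   0,   0,   0,   1,   1,   0],
})

summary = df.groupby("group")["outcome"].agg(
    share=lambda s: len(s) / len(df),    # representation in the dataset
    positive_rate="mean",                # historical outcome rate
)
print(summary)
# Large gaps in either column are a prompt to investigate the data
# source and collection process before training.
```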

Ensure decisions are fair and transparent and explain the decision to the consumer

Organisations and businesses must prohibit credit discrimination based on race, colour, religion, national origin, sex, marital status, age, or receipt of public assistance. Algorithms need to be rigorously tested before release. Companies should also be able to explain AI decisions that affect people's lives, such as a disease diagnosis or why a credit card application was denied. Finally, the company that develops or deploys an AI application must take responsibility when anything goes wrong with it.
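
Rigorous testing can include a release gate on simple fairness metrics. The following hedged sketch checks demographic parity, i.e. whether positive decisions are issued at similar rates across a protected attribute; the threshold is an invented example, not regulatory guidance.

```python
# A simple pre-release fairness gate on model decisions.
import numpy as np

def demographic_parity_gap(preds: np.ndarray, group: np.ndarray) -> float:
    """Largest difference in positive-decision rates between groups."""
    rates = [preds[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

preds = np.array([1, 0, 1, 0, 1, 1, 0, 0])   # model decisions
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # protected attribute

gap = demographic_parity_gap(preds, group)
print(f"parity gap: {gap:.2f}")
assert gap <= 0.2, "disparity exceeds the agreed threshold - investigate"
```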

Ensure the performance assessment of AI

Before deployment, an AI system should have a clearly stated purpose, defined performance metrics, and agreed methods for assessing performance against them. While monitoring those metrics, the assessment team must confirm that the application meets its intended objectives. These assessments should take place both at the broad system level and for the individual components that support and interact with the overall system.
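
In practice this means writing the metrics and their targets down before deployment, then checking both overall and per-component (or per-subgroup) performance against them. A minimal sketch, with invented targets and data:

```python
# Assess a deployed model against pre-agreed performance targets.
import numpy as np
from sklearn.metrics import accuracy_score, recall_score

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

targets = {"accuracy": 0.70, "recall": 0.70}   # agreed before deployment
results = {
    "accuracy": accuracy_score(y_true, y_pred),
    "recall": recall_score(y_true, y_pred),
}
for metric, target in targets.items():
    status = "OK" if results[metric] >= target else "BELOW TARGET"
    print(f"{metric}: {results[metric]:.2f} (target {target}) -> {status}")

# Repeat at component/subgroup level so a strong overall score
# cannot hide a weak part of the system.
for g in np.unique(group):
    mask = group == g
    print(f"group {g} accuracy: {accuracy_score(y_true[mask], y_pred[mask]):.2f}")
```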

Ensure continued monitoring and evaluation of AI applications

AI systems need continuous monitoring, even when a machine performs a task autonomously. A human in the loop should verify that the system produces the expected results, and applications must be revisited and updated as technology advances.
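
Continuous monitoring can start with something as simple as watching the live decision rate for drift away from what was measured at validation time. A hedged sketch, with an invented baseline and tolerance:

```python
# Alert when the live positive-decision rate drifts from the baseline.
import numpy as np

BASELINE_POSITIVE_RATE = 0.30   # measured on the validation set
TOLERANCE = 0.10                # agreed with risk/compliance owners

def check_drift(live_preds: np.ndarray) -> bool:
    """Return True if the live decision rate drifts beyond tolerance."""
    live_rate = live_preds.mean()
    drifted = abs(live_rate - BASELINE_POSITIVE_RATE) > TOLERANCE
    status = "ALERT: review needed" if drifted else "within tolerance"
    print(f"live rate {live_rate:.2f} vs baseline "
          f"{BASELINE_POSITIVE_RATE:.2f} -> {status}")
    return drifted

rng = np.random.default_rng(2)
check_drift(rng.binomial(1, 0.45, size=1_000))   # simulated recent batch
```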

Building trust in AI is essential to making it responsible, and that is only possible when the machine is fair, transparent and accountable. It is therefore essential to check for bias during the planning, design and deployment of AI applications. In addition, businesses should set up a complaint mechanism and provide compensation to those harmed by their applications. Such steps encourage responsible AI by holding someone accountable for its actions.

What next?

AI applications can be a significant source of organisational risk, but making AI transparent and accountable shouldn’t be complicated.

Seclea provides tools for your AI stakeholders (Data Scientists, Ethics Leads, and Risk and Compliance Managers) to work together at every AI lifecycle stage to ensure your organisation has full AI transparency and can audit its AI applications effectively in a few simple steps. We are here to help; email us at hello@seclea.com or fill out the short form.