Machine learning is transforming the world, from healthcare, manufacturing, and finance to scientific discovery and cybersecurity. The major driving force behind machine learning adoption is increased productivity. According to some estimates, by 2030 all aspects of technology, from cyber to physical, will include some component based on machine learning algorithms.
Machine learning, and Artificial Intelligence (AI) more broadly, has significant benefits for the commercial sector and society. However, the relentless pursuit of building AI solutions for every problem, with organisations trying to gain a competitive edge, should be counterbalanced with caution: AI's benefits come with significant risks. In their 2019 annual reports filed with the Securities and Exchange Commission, Google and Microsoft added warnings to their “risk factors” for investors relating to potential legal and ethical problems arising from their AI projects. The risks posed by AI include traceability, bias, privacy, transparency, security, and accountability, as described below:
- Traceability: Tracking the actions taken by humans or machines throughout the lifecycle of an AI application;
- Bias: Discrimination arising from the choice of training data, design decisions, and training evolution (one way to quantify this is sketched after this list);
- Privacy: The sourcing and use of data by an AI algorithm might violate data privacy requirements;
- Transparency: Explaining the behaviour and decisions of advanced AI algorithms;
- Security: Ensuring AI is safe and protected at all lifecycle stages;
- Accountability: The ability to audit an AI algorithm and its decisions to identify the root cause of an issue.
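To make the bias risk concrete, the sketch below computes the demographic parity difference, one common fairness measure, for a binary classifier's predictions. It is a minimal illustration, assuming a binary protected attribute; the variable names and toy data are made up for the example.

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Difference in positive-prediction rates between two groups (0 and 1).

    A value near 0 suggests the model treats both groups similarly on this
    measure; larger values flag potential discrimination worth investigating.
    """
    rate_g0 = y_pred[group == 0].mean()
    rate_g1 = y_pred[group == 1].mean()
    return abs(rate_g1 - rate_g0)

# Toy example: predictions for 8 individuals and a binary protected attribute.
y_pred = np.array([1, 0, 1, 1, 0, 0, 0, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))  # 0.5 -> investigate
```

Demographic parity is only one of several fairness measures; which one is appropriate depends on the application and the applicable regulatory requirements.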
Regulatory authorities and standardisation bodies have proposed regulations, standards, and guidelines for developing and adopting AI solutions, and these have four things in common: fairness, traceability, transparency, and accountability. Most of the regulations and policies take a risk-based approach: if your AI application poses risks to individuals or society, it must abide by the relevant rules and standards.
According to the EU's proposed Artificial Intelligence Act (AIA), any application that poses a high risk must be compliant with the AIA. Failure to comply carries a higher penalty than GDPR non-compliance, with fines rising to €30 million or 6% of global revenue. The AIA defines high-risk applications as “… AI systems that are creating an adverse impact on people’s safety or their fundamental rights are considered high-risk. To ensure trust and consistent high level of safety and fundamental rights protection, a range of mandatory requirements (including a conformity assessment) would apply to all high-risk systems.”
Cybersecurity controls, especially automatic incident detection and response, intrusion detection and response, and firewalls, are listed in the AIA. A simple rule of thumb for identifying whether your cybersecurity application falls under the AIA's high-risk definition is to determine what information it uses and who is affected by its decisions. If the data used during training or deployment relates to humans, or if the AI's decisions might affect humans or their ability to perform a task, your application is probably a high-risk AI application. Even if your application cannot be clearly classified as high-risk or low-risk, the safest course is to aim for AIA compliance, thereby safeguarding your AI investment and minimising potential future compliance issues.
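The rule of thumb above can be expressed as a first-pass screening check. The sketch below is an illustrative heuristic only, not a legal test: the `AISystemProfile` type and its flags are hypothetical inputs that would have to come from your own assessment of the system.

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    """Hypothetical summary of an AI system for a first-pass AIA screening."""
    uses_human_data: bool       # training/deployment data relates to people
    affects_humans: bool        # decisions affect people directly
    affects_human_tasks: bool   # decisions affect a person's ability to do a task

def likely_high_risk(profile: AISystemProfile) -> bool:
    """First-pass screen based on the rule of thumb above.

    Illustrative only: the AIA's actual high-risk classification is defined
    by the Act and requires proper legal assessment.
    """
    return (profile.uses_human_data
            or profile.affects_humans
            or profile.affects_human_tasks)

# Example: an intrusion detection system trained on employee network logs.
ids_profile = AISystemProfile(uses_human_data=True,
                              affects_humans=True,
                              affects_human_tasks=False)
print(likely_high_risk(ids_profile))  # True -> aim for AIA compliance
```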
To de-risk AI development and adoption, it is necessary to start by defining, managing and monitoring a robust Governance, Risk Management and Compliance (GRC) strategy. Organisations are typically well-versed in GRC practices for their business and IT functions. However, treating AI as just another IT element is not recommended: AI requires contributions and oversight from diverse segments of an organisation and from people with varied backgrounds and experience. For example, AI has increased the demand for ethics expertise to ensure systems are developed and used responsibly. Similarly, from a technology perspective, AI development and operations differ from standard software development, so AI GRC requires different strategies and tools.
From a strategy perspective, organisations can define and manage responsible AI practices to ensure that fair, transparent and explainable AI solutions are used. From a technology perspective, organisations need a toolset that analyses, manages and verifies all activities arising during an AI algorithm's lifecycle against responsible AI guidelines and any relevant regulatory requirements.
Seclea (seclea.com), an ISG-RHUL spinout, provides an explainable and responsible AI platform for developing and using AI applications with fairness, traceability, transparency, and accountability. Organisations can manage their risk and regulatory compliance with the Seclea Platform and provide relevant information to all AI stakeholders. This enables all parties to work together towards building a better solution that benefits both the organisation involved and wider society.
Seclea's Platform integrates with the AI development and deployment pipeline, whether on-cloud or on-premises, with little friction. It allows data scientists and ML engineers to design, code, train, and evaluate the best AI solutions for a problem. Seclea works in the background, analysing development activities and their potential impact on fairness, explainability, risk profile and regulatory compliance. AI stakeholders, including data scientists, project managers, ethics leads, risk managers and auditors, can use the Seclea Platform to oversee an AI project. The ultimate goal is collective oversight and efficient management of AI risks.
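To illustrate the kind of background lifecycle tracking described above, the sketch below records training metadata alongside an ordinary scikit-learn run. The `record_event` helper and the `audit_log.jsonl` file are hypothetical stand-ins for a traceability layer, not Seclea's actual API.

```python
import json
import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def record_event(kind: str, details: dict, path: str = "audit_log.jsonl") -> None:
    """Hypothetical helper: append one lifecycle event to an audit trail."""
    with open(path, "a") as f:
        f.write(json.dumps({"time": time.time(), "event": kind, **details}) + "\n")

# Standard training run; each step also leaves an auditable record.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
record_event("data_split", {"train_rows": len(X_train), "test_rows": len(X_test)})

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
record_event("training", {"model": "RandomForestClassifier", "n_estimators": 100})

acc = accuracy_score(y_test, model.predict(X_test))
record_event("evaluation", {"metric": "accuracy", "value": float(acc)})
```

Because each event lands in an append-only log with a timestamp, risk managers and auditors can later reconstruct who did what, when, and with which data, which is the substance of the traceability and accountability requirements discussed earlier.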
What next?
AI applications can be a significant source of organisational risk, but managing AI risk and regulatory compliance does not need to be complicated.
Seclea provides tools for your AI stakeholders (Data Scientists, Ethics Leads, and Risk and Compliance Managers) to work together at every AI lifecycle stage to manage AI risk and achieve regulatory compliance effectively in a few simple steps. We are here to help; email us at hello@seclea.com or fill out the short form.