AI Governance in a Hybrid Multicloud World

Leaders of enterprises that build AI services face an emerging challenge: how to effectively govern the creation, deployment and management of those services throughout the AI lifecycle. These leaders want to understand and gain control over their processes to meet internal policies, external regulations or both. This is where AI governance makes a difference.

Successful AI requires governance: compliance with corporate and ethical principles, laws, and regulations. Noncompliance can cost an organization millions of dollars in fines as governments around the world enact increasingly stringent AI regulations, which can vary by country.

AI Governance as Part of a Data Fabric

AI governance should be part of your data fabric, because it requires the ability to direct, manage and monitor an organization’s AI activities. Typically, data and applications are dispersed across a wide range of computing platforms, including data lakes, enterprise data warehouses and transactional systems, spread across multiple hyperscalers’ clouds and on-premises environments.

Business leaders need to focus on AI regulations: they are legally required to provide trust and transparency across each stage of the AI lifecycle, and failure to do so can lead to seven-figure fines and penalties. AI models can no longer function as black boxes. Enterprise leaders must provide greater visibility into their automation processes and clear documentation of the health and functionality of their models in order to meet regulations.

AI governance is an overarching framework that uses a set of automated processes, methodologies, and tools to manage an organization’s use of AI. Consistent principles guiding the design, development, deployment, and monitoring of models are critical to driving responsible, trustworthy AI. The three key principles shown in Figure #1 are:

  • Know your model: Model transparency starts with automatically capturing all the information on how the model was developed and deployed. This includes capturing metadata, tracking provenance, and documenting the lifecycle. Explainable results are critical to building public confidence, promoting safer practices and facilitating AI adoption.
  • Trust your model: Define enterprise policies, standards and roles, and enforce them automatically to improve compliance. Validate models at deployment and over time to govern retraining and ensure reliability.
  • Use and explain your model: Analyze model performance against KPIs while continuously monitoring in real time for bias, fairness, and accuracy. Share models and documentation across the enterprise, providing supporting evidence for analytic decisions.

Figure #1: Three key principles of AI Governance.

AI Governance Walkthrough

  1. Once the model proposal has gone through the appropriate approval process, a model entry is created in the Model Inventory. The entry will be continuously updated with new information.
  2. The data scientist uses the tool of their choice to develop the model. Training data and metrics from a number of popular open-source frameworks are automatically captured and saved to the model entry. Custom information can also be saved.
  3. When the pre-production model is evaluated for accuracy, drift and bias, the performance metadata is captured and synced.
  4. The model is reviewed and approved for production.
  5. The model is deployed to the preferred platform, and once again the relevant metadata is captured and synced.
  6. Lastly, the production model is continuously monitored, and the performance data captured and synced as well.

A dashboard provides a comprehensive view of the performance metrics for all models, allowing stakeholders to proactively identify and react to any issues.
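
To make the walkthrough concrete, here is a minimal sketch of what such a model-inventory entry might look like in Python. The `ModelEntry` class, its field names, and the stage labels are illustrative assumptions, not IBM’s actual inventory schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any

@dataclass
class ModelEntry:
    """Hypothetical model-inventory entry that accumulates metadata at each lifecycle stage."""
    use_case: str
    model_id: str
    facts: list[dict[str, Any]] = field(default_factory=list)

    def sync(self, stage: str, metadata: dict[str, Any]) -> None:
        """Append timestamped stage metadata (training metrics, evaluation results, etc.)."""
        self.facts.append({
            "stage": stage,
            "captured_at": datetime.now(timezone.utc).isoformat(),
            **metadata,
        })

# The entry is created at approval (step 1) and updated at every later step.
entry = ModelEntry(use_case="loan-default-prediction", model_id="model-001")
entry.sync("development", {"framework": "scikit-learn", "train_accuracy": 0.91})
entry.sync("pre_production", {"accuracy": 0.88, "drift": 0.02, "bias_check": "passed"})
entry.sync("production", {"deployment_target": "cloud-region-a"})
```

Because every stage appends to the same entry, the dashboard view described above only needs to read one record per model to reconstruct its full history.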

Use Case Scenarios

#1. Reducing model deployment time

Data scientists typically take a month or two to build an AI model. During this process, they try out multiple models using different frameworks and different datasets. They finally select one of the models and send it to the model validator (the Model Risk Management, or MRM, team). The model validation team is typically small, and it can take another month or two before validation begins. The validators need additional information, such as: (1) Why was a deep learning model built? Did the data scientist try a simpler model, such as a decision tree? (2) Why was a specific dataset used? Were alternative datasets tried?

When such questions are put to the data scientists, they do not have the answers immediately available. They need to dig up notes from two or three months earlier to reconstruct the answers before they can respond, and each response can lead to more questions from the model validation team. This back and forth between the MRM team and the data scientists significantly delays the model’s deployment to production.

IBM’s AI Governance technology is designed to automatically extract and store metadata during model development. The model validator can simply go to the factsheet for the model use case and find answers to their questions. This avoids the need to reach out to the data scientists and reduces the time to deploy the model to production.
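
As a rough illustration of the idea (not IBM’s actual API), the sketch below wraps model training so that the facts a validator typically asks for are captured automatically as a side effect. The `record_training_facts` helper and its JSON factsheet format are hypothetical.

```python
import json
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

def record_training_facts(model, X_train, y_train, X_test, y_test, notes=""):
    """Hypothetical capture hook: fit the model and log the facts a validator would ask for."""
    model.fit(X_train, y_train)
    facts = {
        "model_class": type(model).__name__,   # answers "did you try a simpler model?"
        "hyperparameters": model.get_params(),
        "dataset_shape": list(X_train.shape),  # which dataset, and how large
        "test_accuracy": model.score(X_test, y_test),
        "captured_at": time.time(),
        "notes": notes,                        # rationale the validator would otherwise chase
    }
    with open(f"factsheet_{type(model).__name__}.json", "w") as f:
        json.dump(facts, f, indent=2, default=str)
    return model

X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
record_training_facts(DecisionTreeClassifier(max_depth=3), X_tr, y_tr, X_te, y_te,
                      notes="Baseline tried before the deep learning candidate.")
```

Because capture happens at training time, the notes and metrics are recorded while the context is fresh, rather than reconstructed from memory months later.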

#2. Eliminating the bypassing of best practices when developing and deploying models

When a data science team gets a request to create a new model, they will typically check whether they already have a model that solves the same or a similar use case. If they find such a model, they may decide to reuse the model that is already running in production. This can violate the risk management best practices defined in the organization.

For example, if the model running in production is used for clients in the US and the new use case involves making similar decisions for clients in Europe, the fairness checks required for models used with European clients differ from those required for other clients. Reusing the model without those checks violates risk management best practices.

IBM’s AI Governance technology provides customizable workflows that allow enterprises to encode their internal risk management best practices. If a model is to be used for European clients, the workflow includes a step to check the model’s fairness; if fairness falls below the threshold, the workflow fails and the model is not deployed to production.
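
The sketch below illustrates what such a workflow gate could look like, assuming a disparate-impact fairness metric and the commonly used 0.8 threshold. The `fairness_gate` function and its region check are hypothetical, not IBM’s workflow API.

```python
def disparate_impact(predictions, group):
    """Ratio of favorable-outcome rates: unprivileged group vs. privileged group."""
    priv = [p for p, g in zip(predictions, group) if g == "privileged"]
    unpriv = [p for p, g in zip(predictions, group) if g == "unprivileged"]
    return (sum(unpriv) / len(unpriv)) / (sum(priv) / len(priv))

def fairness_gate(predictions, group, region, threshold=0.8):
    """Hypothetical workflow step: block deployment when fairness is below threshold."""
    if region != "EU":
        return True  # in this example only the European use case adds the check
    di = disparate_impact(predictions, group)
    if di < threshold:
        raise RuntimeError(f"Deployment blocked: disparate impact {di:.2f} < {threshold}")
    return True

# Example: favorable outcomes (1) skew heavily toward the privileged group.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["privileged"] * 4 + ["unprivileged"] * 4
fairness_gate(preds, groups, region="EU")  # raises: disparate impact 0.33 < 0.8
```

Encoding the check as a mandatory workflow step means a reused model cannot quietly skip the fairness validation that the new use case requires.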

#3. Continuous model monitoring to ensure regulatory compliance

Organizations increasingly use AI to create shortlists of resumes when hiring for a specific role. Such models are typically built without attributes such as gender or ethnicity as features, so when the model makes a decision, it does not know the candidate’s gender or ethnicity. However, this does not guarantee that the model is free from unfair bias: other included features are frequently strongly correlated with gender or ethnicity, so a model built on unfairly biased training data will continue to show that bias, even without the removed attributes.

For example, in the United States, male applicants have historically requested higher salaries than female applicants. If an organization has traditionally hired men to fill a specific role, then the typical successful applicant in its training dataset will both be male and request a high salary. When gender is removed, the strongly correlated salary-request feature remains, and models built from the dataset will be unfairly biased toward men, even with gender excluded from the model.

IBM’s AI Governance technology provides a methodology called Indirect Bias to compute the fairness of a model even when attributes such as gender and ethnicity are not used as features. IBM has also incorporated technology from IBM Research to analyze the model for unfair bias at development time, helping data scientists and model owners detect potential issues before the model is put into production.
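
The following simplified sketch illustrates the indirect-bias idea (it is not IBM’s actual algorithm): even though gender is excluded from the model’s features, a correlated proxy feature carries the bias, which can be surfaced by measuring that correlation and by evaluating fairness on the protected attribute kept alongside the predictions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hiring data: gender is NOT a model feature, but salary_request correlates with it.
n = 2000
gender = rng.integers(0, 2, n)                            # 0 = female, 1 = male (protected, unused)
salary_request = 60 + 15 * gender + rng.normal(0, 5, n)   # proxy feature, in $ thousands
shortlisted = (salary_request > 70).astype(int)           # model trained on the proxy

# Step 1: flag the proxy by correlating a model feature with the protected attribute.
proxy_corr = np.corrcoef(salary_request, gender)[0, 1]
print(f"salary_request vs. gender correlation: {proxy_corr:.2f}")

# Step 2: measure fairness on the protected attribute, even though the model never saw it.
rate_female = shortlisted[gender == 0].mean()
rate_male = shortlisted[gender == 1].mean()
print(f"shortlist rate: female {rate_female:.2f}, male {rate_male:.2f}")
print(f"disparate impact: {rate_female / rate_male:.2f}")
```

On this synthetic data the correlation is high and the disparate impact is far below any reasonable threshold, even though gender never appears in the model, which is exactly the situation described above.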

Conclusion and Next Steps

With enterprise infrastructures embracing a combination of on-premises and hybrid multicloud environments from multiple hyperscalers, AI governance needs to take a holistic platform approach to drive responsible, transparent and explainable artificial intelligence workflows.

IBM Cloud Pak for Data is designed to deliver this as part of its data fabric capabilities, which can help organizations:

  • Catalog and monitor AI models with automated metadata capture, identification of successful models, and the ability to determine remediation initiatives.
  • Automate the identification, monitoring and reporting of facts and workflows at scale, mitigating bias and drift to manage AI risk.
  • Translate external AI regulations into policies for automated enforcement. Use customizable dashboards to improve stakeholder collaboration.

In turn, AI governance enables organizations to:

  • Trace and document the origin of datasets, models, associated metadata and pipelines at scale.
  • Monitor AI models for fairness, bias and drift, automatically identifying when correction is needed.
  • Use protections and validation to ensure machine learning (ML) models in production are fair, transparent and explainable.
  • Use automated and collaborative tools to shorten processes and increase AI lifecycle visibility.
  • Increase efficiency and create balance across people, processes and AI technologies.

For more information on how to get started on your AI Governance journey, visit the following links:
