The practices, tools, and policies for managing AI and machine learning models responsibly throughout their lifecycle, from development through deployment and retirement. Think of it as quality control and compliance for AI: model governance ensures models are fair, explainable, accurate, and compliant with regulations. It encompasses bias detection, model explainability, audit trails, version control, performance monitoring, and responsible AI principles. AWS offers SageMaker Model Governance, Azure provides Responsible AI tools, GCP has Vertex AI Model Monitoring and Explainability, and OCI offers model management capabilities within OCI Data Science.
A bank deploying a loan approval model implements model governance by running automated bias tests across demographic groups, logging every prediction with its reasoning for regulatory audits, setting up drift detection alerts to catch accuracy degradation, and requiring human review before any model update goes to production.
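The automated bias test in the bank scenario can be sketched as a disparate-impact check across demographic groups (the "four-fifths rule" commonly used in US fair-lending analysis). This is a minimal illustration with hypothetical function names, not any cloud provider's API:

```python
from collections import Counter

def approval_rates(decisions, groups):
    # Per-group approval rate: fraction of approvals (1s) within each group.
    totals, approved = Counter(), Counter()
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        approved[group] += decision
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates):
    # Four-fifths rule: ratio of the lowest to the highest group approval
    # rate; values below 0.8 are conventionally flagged for review.
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 1.0

# Toy prediction log: 1 = approved, 0 = denied, with each applicant's group.
decisions = [1, 1, 1, 1, 0, 1, 1, 1, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = approval_rates(decisions, groups)   # {"A": 0.8, "B": 0.6}
ratio = disparate_impact(rates)             # 0.6 / 0.8 = 0.75
print(rates, round(ratio, 2), "FLAG" if ratio < 0.8 else "OK")
```

A production pipeline would run a check like this automatically on every candidate model, and a flagged ratio would block promotion until a human reviews the result.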
All four clouds provide building blocks for model governance: model registries/catalogs for versioning and approvals, monitoring for drift/performance, and tools for explainability and responsible AI. AWS centers governance around SageMaker’s registry and monitoring features; Azure emphasizes Responsible AI tooling integrated into Azure ML; GCP provides monitoring and explainability within Vertex AI; OCI provides model cataloging, deployment management, and operational monitoring capabilities within OCI Data Science.
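The registry, audit-trail, and approval building blocks above can be illustrated with a minimal in-memory sketch: models are versioned, every action is logged, and deployment is blocked until a version is explicitly approved. Class and method names here are hypothetical; cloud registries such as SageMaker's Model Registry expose analogous version and approval-status concepts:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelVersion:
    name: str
    version: int
    metrics: dict
    status: str = "PendingApproval"   # PendingApproval | Approved | Rejected
    audit_log: list = field(default_factory=list)

    def record(self, event):
        # Timestamped audit trail entry for regulatory review.
        self.audit_log.append((datetime.now(timezone.utc).isoformat(), event))

class Registry:
    def __init__(self):
        self.versions = {}   # (name, version) -> ModelVersion

    def register(self, name, metrics):
        # Auto-increment the version number for this model name.
        version = max((v for (n, v) in self.versions if n == name), default=0) + 1
        mv = ModelVersion(name, version, metrics)
        mv.record("registered")
        self.versions[(name, version)] = mv
        return mv

    def approve(self, name, version, reviewer):
        mv = self.versions[(name, version)]
        mv.status = "Approved"
        mv.record(f"approved by {reviewer}")

    def deploy(self, name, version):
        # Governance gate: only approved versions may reach production.
        mv = self.versions[(name, version)]
        if mv.status != "Approved":
            raise PermissionError(f"{name} v{version} is {mv.status}, not Approved")
        mv.record("deployed")
        return f"{name}-v{version} deployed"

reg = Registry()
mv = reg.register("loan-approval", {"auc": 0.91})
try:
    reg.deploy("loan-approval", 1)          # blocked: still PendingApproval
except PermissionError as e:
    print("blocked:", e)
reg.approve("loan-approval", 1, reviewer="risk-team")
print(reg.deploy("loan-approval", 1))
```

The managed services add what this sketch omits: access control over who may approve, automated checks wired into the approval step, and monitoring that can trigger a rollback to a previously approved version.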