Model serving makes trained AI models available to applications through APIs or services so they can request predictions. It is like opening a restaurant that serves dishes made from tested recipes.
For example, model serving infrastructure might host a language translation model that applications call via API to translate text in real time.
The major cloud providers all offer managed endpoints that host models and expose HTTPS APIs for inference. ML platforms (SageMaker, Azure ML, Vertex AI, OCI Data Science) focus on deploying your own trained models with scaling, monitoring, and versioning. Foundation-model services (Bedrock, Azure OpenAI, OCI Generative AI) provide hosted models you call via API without managing the underlying model servers.
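The core contract is the same everywhere: the application sends a request, the endpoint runs inference and returns a prediction. Below is a minimal in-process sketch of that request/response cycle, using a toy dictionary lookup as a stand-in for a real translation model; all names (`predict`, `handle_request`, the JSON fields) are illustrative, not any specific provider's API.

```python
import json

# Toy "model": a tiny English->French lookup standing in for a real
# translation model. Purely illustrative.
TOY_MODEL = {"hello": "bonjour", "goodbye": "au revoir"}

def predict(text: str) -> str:
    """The model's inference function: translate word by word."""
    return " ".join(TOY_MODEL.get(word, word) for word in text.lower().split())

def handle_request(body: str) -> str:
    """What a serving endpoint does on each call:
    parse the JSON request, run inference, return a JSON response."""
    payload = json.loads(body)
    result = predict(payload["text"])
    return json.dumps({"model_version": "v1", "translation": result})

# An application "calls the API" by sending a JSON body:
response = handle_request(json.dumps({"text": "hello goodbye"}))
print(response)  # {"model_version": "v1", "translation": "bonjour au revoir"}
```

In a managed endpoint, the JSON parsing, scaling, and versioning around `predict` are handled by the platform; your code supplies only the inference function.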