Making trained AI models available to applications through APIs or services so they can request predictions. Like opening a restaurant that serves dishes created from tested recipes.
Model serving infrastructure hosts a language translation model that applications can call via API to translate text in real-time.
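The pattern can be sketched with a minimal, self-contained example: a stub "translation model" (a hypothetical word-lookup stand-in, not a real trained model) is loaded once and exposed behind an HTTP endpoint, and a client application calls it with JSON, just as it would a managed cloud endpoint. The `/predict` route name and request shape are illustrative assumptions, not any provider's actual API.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

# Hypothetical stand-in for a trained translation model; a real service
# would load the model once at startup, not on every request.
def translate(text: str) -> str:
    glossary = {"hello": "hola", "world": "mundo"}
    return " ".join(glossary.get(w, w) for w in text.lower().split())

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Parse the JSON body sent by the calling application.
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"translation": translate(payload["text"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging for the demo
        pass

# Port 0 asks the OS for any free port; serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), PredictHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client side: an application calls the serving endpoint over HTTP.
req = Request(
    f"http://127.0.0.1:{server.server_port}/predict",
    data=json.dumps({"text": "Hello world"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urlopen(req) as resp:
    result = json.load(resp)
print(result["translation"])  # -> hola mundo
server.shutdown()
```

Managed platforms wrap this same request/response loop with autoscaling, authentication, and monitoring, but the contract seen by the application is the same: send input, receive a prediction.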
All four provide managed endpoints to deploy trained models and expose them via HTTPS APIs for real-time (and in some cases batch/async) predictions, with autoscaling, monitoring, and access control.