Cloud computing is the delivery of computing resources — servers, storage, databases, networking, software, analytics, and intelligence — over the internet, on demand, with pay-as-you-go pricing. Instead of owning and operating physical hardware in your own data centers, you access technology services from a cloud provider only when you need them. This eliminates large upfront capital expenditures, replaces them with low variable costs, and lets organizations scale instantly in response to demand spikes. The three primary service models are Infrastructure as a Service (IaaS, e.g. virtual machines), Platform as a Service (PaaS, e.g. managed databases), and Software as a Service (SaaS, e.g. Gmail). When would you use cloud computing? Almost always — cloud is the default choice for new applications because of its pay-as-you-go pricing, global reach, managed services that reduce operational overhead, and the ability to scale from zero to millions of users without upfront hardware investment. The main exception is organizations with strict data sovereignty requirements or very predictable, high-utilization workloads where owning hardware can be cheaper (the 'buy vs. rent' crossover point). Common mistakes: treating the cloud as a simple lift-and-shift of on-premises architecture (you lose most of the cost and scalability benefits), underestimating egress costs for data-heavy workloads, and skipping cloud-native services (managed databases, queues, caches) in favor of self-managed equivalents that require more operational effort.
Example: Instead of buying expensive servers, Netflix uses cloud computing to stream videos to millions of pe...
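The 'buy vs. rent' crossover point above can be sketched as a simple cost comparison. All prices here are illustrative assumptions, not real quotes: a hypothetical on-demand rate, server price, and ops cost.

```python
# Illustrative buy-vs-rent crossover: at what utilization does owning
# hardware become cheaper than pay-as-you-go cloud? All numbers are
# hypothetical assumptions for the sketch.

def monthly_cloud_cost(hourly_rate: float, utilization: float) -> float:
    """Pay-as-you-go: you only pay for the hours you actually run."""
    hours_per_month = 730
    return hourly_rate * hours_per_month * utilization

def monthly_owned_cost(server_price: float, amortization_months: int,
                       fixed_ops_per_month: float) -> float:
    """Owned hardware: amortized purchase price plus fixed operations
    cost, paid regardless of how heavily the server is used."""
    return server_price / amortization_months + fixed_ops_per_month

cloud_hourly = 0.45   # hypothetical on-demand rate, $/hour
owned = monthly_owned_cost(server_price=8000, amortization_months=36,
                           fixed_ops_per_month=80)  # about $302/month

for utilization in (0.25, 0.50, 1.00):
    cloud = monthly_cloud_cost(cloud_hourly, utilization)
    cheaper = "cloud" if cloud < owned else "owned"
    print(f"{utilization:.0%} utilization: cloud ${cloud:.0f}/mo "
          f"vs owned ${owned:.0f}/mo -> {cheaper}")
```

At low utilization the variable cloud cost wins; near 100% utilization the amortized owned hardware pulls ahead, which is exactly the "very predictable, high-utilization workloads" exception named above.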
Kubernetes (often abbreviated K8s) is an open-source container orchestration platform originally developed by Google that automates the deployment, scaling, and management of containerized applications across a cluster of servers. The core building block is the Pod — one or more containers that share a network and storage. Deployments manage how many Pod replicas run and handle rolling updates; Services expose Pods to network traffic with stable addresses; ConfigMaps and Secrets inject configuration without rebuilding images; and Horizontal Pod Autoscalers scale replica counts based on CPU or custom metrics. Every major cloud provider offers a managed Kubernetes service: AWS EKS, Azure AKS, Google GKE, and OCI OKE. When would you use Kubernetes? Kubernetes is appropriate when you're running multiple containerized microservices that each need independent scaling, when you need zero-downtime rolling deployments, when you're managing workloads that benefit from declarative infrastructure (desired-state management), or when you need portability across cloud providers. For single-service deployments, ECS Fargate, App Service, or Cloud Run may be simpler alternatives. Common mistakes: treating Kubernetes as a simple 'containers on a server' solution without understanding its operational complexity (networking, storage, RBAC, and observability all require deep investment), using the mutable 'latest' image tag (always pin to a specific digest for production workloads), not setting resource requests and limits on containers (unset limits cause noisy-neighbor problems and unpredictable scheduling), placing stateful databases inside Kubernetes without understanding Persistent Volumes (many teams use RDS or managed databases instead), and skipping RBAC configuration (every workload should have the minimum permissions needed via dedicated ServiceAccounts).
Example: An e-commerce platform runs its checkout service as a Kubernetes Deployment with 3 replicas. During ...
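A minimal Deployment manifest, expressed here as a Python dict, ties together the practices called out above: a pinned image digest, explicit resource requests/limits, and a dedicated ServiceAccount. The image name, digest placeholder, namespace, and ServiceAccount name are all hypothetical.

```python
# Sketch of a Deployment manifest as a Python dict (the same structure
# you would write in YAML). Names and the digest are placeholders.

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "checkout", "namespace": "shop"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "checkout"}},
        "template": {
            "metadata": {"labels": {"app": "checkout"}},
            "spec": {
                # Least-privilege RBAC via a dedicated ServiceAccount
                "serviceAccountName": "checkout-sa",
                "containers": [{
                    "name": "checkout",
                    # Pin to a digest, never a mutable tag like :latest
                    "image": "registry.example.com/checkout@sha256:<digest>",
                    "resources": {
                        # Requests drive scheduling; limits cap usage and
                        # prevent noisy-neighbor problems.
                        "requests": {"cpu": "250m", "memory": "256Mi"},
                        "limits": {"cpu": "500m", "memory": "512Mi"},
                    },
                }],
            },
        },
    },
}
```

Serialized to YAML and applied with `kubectl apply`, this is the declarative desired state the control plane continuously reconciles toward.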
AWS's fully managed Windows-native shared file storage service, delivering SMB (Server Message Block) file shares backed by Windows Server and integrated directly with Microsoft Active Directory. Where EFS is purpose-built for Linux NFS workloads and FSx for NetApp ONTAP supports NFS, SMB, and iSCSI simultaneously for mixed environments, FSx for Windows File Server is the right choice when your applications, users, or services expect a native Windows file share — one that honours AD user permissions, supports DFS (Distributed File System) namespaces for multi-share aggregation, and exposes Windows ACLs and shadow copies without any translation layer. It deploys in a Multi-AZ configuration with an active standby file server and automatic failover, and integrates with AWS Backup for scheduled, policy-driven backups. Throughput capacity and storage can be scaled independently without downtime, and every file system is encrypted at rest with AWS KMS. DFS namespaces let you group multiple FSx file shares under a single unified path, making it straightforward to migrate on-premises Windows file servers to AWS and keep legacy UNC paths intact. The closest equivalents on other clouds are Azure Files (for SMB workloads on Azure) and OCI File Storage. Use FSx for Windows File Server when you need a fully managed SMB share with native Active Directory integration for Windows-based workloads; use EFS when your workloads are Linux-only and NFS-based; use FSx for NetApp ONTAP when you need multi-protocol access or ONTAP-specific data management features across both Windows and Linux.
Example: A professional services firm migrates its on-premises Windows file server to FSx for Windows File Se...
Amazon Relational Database Service (RDS): an AWS managed database service that handles maintenance, backups, and scaling. Like having a database administrator without hiring one.
Example: An e-commerce site uses RDS for their product catalog database, with AWS handling all backups and up...
Storage that divides data into fixed-size blocks, like a traditional hard drive. Like having a parking garage with numbered spaces - each space is the same size and has a specific address.
Example: AWS EBS provides block storage for databases and applications that need the performance of a local h...
Technology that lets you access and control a computer from another location over the internet. Like having a virtual window into another computer where you can see its screen and control its mouse and keyboard from anywhere. Cloud services like AWS WorkSpaces, Azure Virtual Desktop, and Google Cloud Workstations provide fully managed remote desktops that employees can access from any device.
Example: A software company provides their developers with AWS WorkSpaces so they can work from home, coffee ...
DigitalOcean is a cloud computing platform built for developers and startups, offering simple and cost-predictable infrastructure across compute (Droplets), managed Kubernetes (DOKS), serverless functions, managed databases, object storage (Spaces), and an AI/LLM platform (GenAI Platform). Known for its developer-friendly UI, transparent flat-rate pricing, and faster onboarding compared to hyperscalers like AWS, Azure, and GCP.
Example: A SaaS startup runs their entire stack on DigitalOcean: the API on App Platform, the database on Man...
DigitalOcean's term for a virtual machine (VM) in the cloud. Droplets are scalable Linux VMs that range from shared-CPU basic plans to CPU-optimized, memory-optimized, and GPU-accelerated configurations. They are the fundamental compute unit in the DigitalOcean ecosystem and can run any Linux workload.
Example: A startup deploys their Node.js API on a $12/month Basic Droplet (2 vCPU, 2 GB RAM) in the NYC3 regi...
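Droplets are created through the DigitalOcean API (v2) with a small JSON payload. The sketch below builds that payload without sending it; the region, size, and image slugs follow DigitalOcean's documented naming, but verify current slugs in the docs, and the name and tag are placeholders.

```python
# Sketch of a Droplet create request for the DigitalOcean API v2.
# The payload is built but not sent; a real call needs an API token.
import json

payload = {
    "name": "api-server-1",         # placeholder hostname
    "region": "nyc3",               # datacenter slug
    "size": "s-2vcpu-2gb",          # 2 vCPU / 2 GB basic plan slug
    "image": "ubuntu-22-04-x64",    # base OS image slug
    "tags": ["env:production"],
}

# To actually create the Droplet:
#   POST https://api.digitalocean.com/v2/droplets
#   Authorization: Bearer $DIGITALOCEAN_TOKEN
#   Content-Type: application/json
body = json.dumps(payload)
print(body)
```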
DigitalOcean GPU-accelerated virtual machines equipped with NVIDIA H100 or A100 GPUs, designed for AI/ML training, inference, and other GPU-intensive workloads. GPU Droplets provide on-demand access to enterprise-grade GPUs without the capital expense of owning hardware.
Example: A machine learning team uses GPU Droplets with NVIDIA H100 80 GB GPUs to fine-tune a large language ...
DigitalOcean Kubernetes Service — a fully managed Kubernetes cluster offering. DOKS handles control plane management, upgrades, and HA automatically, so teams focus on deploying workloads rather than managing Kubernetes infrastructure. The control plane is free; customers pay only for worker Droplets.
Example: An e-commerce platform migrates from a single Droplet to DOKS to handle variable holiday traffic. Th...
DigitalOcean's fully managed Platform-as-a-Service (PaaS) for deploying web applications, APIs, and static sites directly from a Git repository. App Platform automatically builds, deploys, and scales applications, handling infrastructure, SSL, and CDN distribution without any server management.
Example: A solo developer connects their GitHub repository to App Platform and deploys a Python Django app in...
DigitalOcean's S3-compatible object storage service for storing and serving unstructured data such as images, videos, backups, and static assets. Spaces includes a built-in CDN for low-latency global delivery and supports S3-compatible APIs and client libraries.
Example: A SaaS company stores user-uploaded profile images in Spaces and enables the built-in CDN. Images ar...
Fully managed database clusters on DigitalOcean supporting PostgreSQL, MySQL, Redis, MongoDB, and Kafka. Managed Databases handle provisioning, patching, backups, failover, and scaling automatically, eliminating database operational overhead.
Example: A fintech startup moves from a self-managed PostgreSQL Droplet to DigitalOcean Managed Databases. Th...
DigitalOcean's fully managed Layer 4 (TCP) and Layer 7 (HTTP/HTTPS) load balancer that distributes incoming traffic across multiple Droplets or Kubernetes worker nodes. Supports sticky sessions, SSL termination, health checks, and both round-robin and least-connections algorithms.
Example: A gaming backend uses DigitalOcean Load Balancer to distribute WebSocket connections across 8 game s...
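A load balancer definition combines the features listed above: forwarding rules, SSL termination, health checks, sticky sessions, and an algorithm choice. The dict below follows the shape of the DigitalOcean API v2 request (verify field names against current docs); the certificate ID and Droplet IDs are placeholders.

```python
# Sketch of a DigitalOcean Load Balancer create request (API v2 shape).
# TLS terminates at the balancer; plain HTTP flows to the Droplets.

lb_request = {
    "name": "web-lb",
    "region": "nyc3",
    "algorithm": "least_connections",
    "forwarding_rules": [{
        "entry_protocol": "https",   # client-facing listener (Layer 7)
        "entry_port": 443,
        "target_protocol": "http",   # backend traffic after termination
        "target_port": 80,
        "certificate_id": "<managed-cert-id>",  # placeholder
    }],
    "health_check": {
        "protocol": "http",
        "port": 80,
        "path": "/healthz",          # hypothetical health endpoint
    },
    "sticky_sessions": {"type": "cookies", "cookie_name": "DO_LB",
                        "cookie_ttl_seconds": 300},
    "droplet_ids": [101, 102, 103],  # placeholder backend Droplets
}
```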
DigitalOcean's platform for building and deploying AI agents and applications using open-source large language models. The GenAI Platform provides a serverless inference API, knowledge base (RAG) integration, agent orchestration, and model selection from providers like Meta (Llama) and Mistral — at significantly lower cost than major cloud LLM APIs.
Example: A customer support startup builds an AI chatbot using DigitalOcean GenAI Platform with a Llama 3 70B...
DigitalOcean's serverless Functions-as-a-Service (FaaS) offering, powered by Apache OpenWhisk. Functions execute event-driven code in response to HTTP requests or scheduled triggers without managing servers. Supports Node.js, Python, PHP, and Go with automatic scaling to zero.
Example: A media startup uses DigitalOcean Functions to process image uploads asynchronously. When a user upl...
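In the OpenWhisk convention that DigitalOcean Functions uses, the entry point is a `main(args)` function that receives parameters as a dict and returns a dict. The thumbnail logic below is a stand-in placeholder to keep the sketch self-contained.

```python
# Sketch of a DigitalOcean Functions handler (OpenWhisk convention):
# main(args) receives merged parameters and returns a response dict.

def main(args):
    image_url = args.get("image_url")
    if not image_url:
        # Web functions can return HTTP-style responses
        return {"statusCode": 400,
                "body": {"error": "image_url is required"}}
    # Real code would fetch and resize the image here; this line is a
    # placeholder so the sketch runs standalone.
    return {"statusCode": 200,
            "body": {"thumbnail": image_url + "?size=thumb"}}

# Local invocation for testing, no platform required:
print(main({"image_url": "https://example.com/cat.png"}))
```

Because the handler is a plain function of its arguments, it can be unit-tested locally before deploying with `doctl serverless deploy`.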
DigitalOcean Virtual Private Cloud — an isolated private network within a DigitalOcean region that allows Droplets and managed services to communicate securely using private IP addresses, isolated from the public internet and from other customers' networks.
Example: A healthcare company places all backend Droplets and their Managed PostgreSQL database inside a VPC....
DigitalOcean's built-in infrastructure monitoring service that collects CPU, memory, disk I/O, and network metrics from Droplets automatically. Includes alert policies, uptime monitoring, and integration with third-party tools. Available at no extra cost for all Droplets.
Example: A DevOps team configures DigitalOcean Monitoring alerts to page them via Slack when any Droplet's CP...
DigitalOcean's private container image registry (DOCR) for storing and distributing Docker container images. DOCR integrates directly with DOKS clusters, enabling Kubernetes worker nodes to pull images from the private registry without additional credential configuration. Supports image scanning and garbage collection.
Example: A development team pushes their Node.js application Docker image to DOCR after each CI build. Their ...
DigitalOcean's DNS hosting service that manages authoritative DNS records for domains, including A, AAAA, CNAME, MX, TXT, NS, and SRV records. Records can be programmatically managed via the DigitalOcean API and can reference Droplets, Load Balancers, and other resources by name for automatic IP resolution.
Example: A team manages their DNS for api.example.com using DigitalOcean Managed DNS. When they reassign a Re...
Point-in-time copies of a DigitalOcean Droplet's disk state, captured on-demand or on a schedule. Snapshots can be used to clone Droplets, migrate between regions, or restore a Droplet to a known-good state. Snapshots are stored in object storage and billed at $0.06/GB/month based on the compressed snapshot size.
Example: Before deploying a major database schema migration, a team takes a Droplet Snapshot of the applicati...
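The billing rule in the definition ($0.06/GB/month on the compressed size) makes snapshot costs easy to estimate; the compression figure below is an illustrative assumption.

```python
# Snapshot cost estimate using the $0.06/GB/month rate from the
# definition above, billed on compressed size.

def snapshot_monthly_cost(compressed_gb: float,
                          rate_per_gb: float = 0.06) -> float:
    return round(compressed_gb * rate_per_gb, 2)

# A 100 GB disk that compresses to 40 GB (illustrative ratio):
print(snapshot_monthly_cost(40))  # -> 2.4
```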
DigitalOcean's managed synthetic monitoring service that performs HTTP, HTTPS, TCP, and ping checks against endpoints at configurable intervals from multiple global locations. Uptime Monitoring alerts via email, Slack, or PagerDuty when an endpoint fails its health check, typically within 60 seconds of an outage.
Example: A startup monitors their API gateway with DigitalOcean Uptime Monitoring, polling `https://api.examp...
User-defined labels that can be applied to DigitalOcean resources (Droplets, volumes, images, load balancers) for organisation, filtering, and automation. Tags enable grouping resources by environment (production, staging), application, or team, and are used as selectors for Cloud Firewalls and API batch operations.
Example: A company tags all production Droplets with `env:production` and `app:checkout`. They create a Cloud...
DigitalOcean's catalog of pre-configured 1-Click Apps that deploy ready-to-run software on Droplets or DOKS clusters. Marketplace offerings include databases (MongoDB, Redis), web servers (LEMP, LAMP), CMS platforms (WordPress, Ghost), developer tools (Docker, MEAN Stack), and security tools — all deployed with a single click from the DigitalOcean control panel.
Example: A developer needs a WordPress site running in under 5 minutes. They select the WordPress 1-Click App...
A group of identically configured worker Droplets within a DOKS cluster that run Kubernetes pods. A DOKS cluster can have multiple node pools, each with different Droplet sizes and auto-scaling configurations — enabling workload separation between CPU-intensive batch jobs and memory-hungry web services within the same cluster.
Example: An ML platform uses two DOKS node pools: a general-purpose pool (s-4vcpu-8gb × 3) for API services a...
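Workloads are steered to a specific node pool with a `nodeSelector` on the pool's node label. The label key below follows DOKS conventions but should be treated as an assumption and checked against your cluster's actual node labels; the pool name and image are placeholders.

```python
# Sketch of pinning a Pod to a DOKS node pool via nodeSelector.
# The label key is assumed from DOKS conventions; verify with
# `kubectl get nodes --show-labels` on your cluster.

gpu_job = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "train-job"},
    "spec": {
        # Schedule only onto workers in the hypothetical "gpu-pool"
        "nodeSelector": {"doks.digitalocean.com/node-pool": "gpu-pool"},
        "containers": [{
            "name": "train",
            "image": "registry.example.com/train:v1",  # placeholder
        }],
    },
}
```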
A YAML or JSON configuration file that declaratively defines the complete structure of a DigitalOcean App Platform application, including all services, workers, jobs, databases, domains, environment variables, and build configuration. App Specs enable infrastructure-as-code workflows and can be version-controlled in Git.
Example: A team stores their App Spec in `.do/app.yaml` in their repository. When they onboard a new microser...
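An App Spec declares each service, its source, and its runtime sizing. The dict below mirrors the documented YAML schema (verify field names against the current App Spec reference); the app name, repo, and run command are placeholders.

```python
# Sketch of an App Platform App Spec as a Python dict; serialized to
# YAML this is what lives in .do/app.yaml. Names are placeholders.

app_spec = {
    "name": "my-saas",
    "region": "nyc",
    "services": [{
        "name": "api",
        "github": {
            "repo": "acme/my-saas",       # placeholder repository
            "branch": "main",
            "deploy_on_push": True,        # redeploy on every push
        },
        "environment_slug": "python",
        "instance_size_slug": "basic-xxs",
        "instance_count": 2,
        "http_port": 8080,
        "run_command": "gunicorn app:app",  # hypothetical start command
    }],
}
```

Committing this spec to Git gives an infrastructure-as-code record of the whole app: changing `instance_count` in a pull request is a reviewable scaling decision.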
DigitalOcean's network-attached SSD block storage volumes that can be attached to Droplets for persistent, high-performance storage independent of the Droplet's local disk. Volumes persist independently of Droplet lifecycle and can be detached and reattached to any Droplet in the same region. Supports volumes from 1 GB to 16 TB with consistent low-latency I/O.
Example: A PostgreSQL database server runs on a Droplet with a 500 GB Block Storage Volume. When the team nee...
DigitalOcean's managed stateful network firewall that filters inbound and outbound traffic at the network edge before it reaches Droplets. Cloud Firewalls are free, applied to Droplets by tag or ID, and enforce allow-list rules for TCP, UDP, ICMP, and specific port ranges — without requiring any configuration on the Droplet itself.
Example: A production web stack applies a DigitalOcean Cloud Firewall that allows inbound traffic only on por...
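A firewall rule set is a pure allow-list: anything not matched is dropped. The dict below follows the shape of the DigitalOcean API v2 firewall request (verify field names against current docs); the office CIDR is a documentation-range placeholder.

```python
# Sketch of a Cloud Firewall rule set (DigitalOcean API v2 shape).
# The tag selector applies it to every Droplet tagged env:production.

firewall = {
    "name": "web-prod",
    "tags": ["env:production"],
    "inbound_rules": [
        # Public HTTPS from anywhere (IPv4 and IPv6)
        {"protocol": "tcp", "ports": "443",
         "sources": {"addresses": ["0.0.0.0/0", "::/0"]}},
        # SSH restricted to a single range (placeholder CIDR)
        {"protocol": "tcp", "ports": "22",
         "sources": {"addresses": ["203.0.113.0/24"]}},
    ],
    "outbound_rules": [
        {"protocol": "tcp", "ports": "all",
         "destinations": {"addresses": ["0.0.0.0/0", "::/0"]}},
    ],
}
```

Because the firewall is selected by tag, newly created production Droplets inherit the rules automatically with no per-host configuration.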
A static, publicly accessible IPv4 address that can be assigned to any Droplet in a DigitalOcean region and instantly reassigned to another Droplet. Reserved IPs enable failover architectures where traffic is redirected from a failing Droplet to a standby by reassigning the IP — with DNS changes propagating in seconds rather than minutes.
Example: A high-availability web application uses a Reserved IP as its public endpoint. When the active Dropl...
The built-in content delivery network edge caching layer included with DigitalOcean Spaces object storage. Spaces CDN caches content at globally distributed edge locations, reducing origin load and dramatically improving download speeds for geographically distributed users.
Example: A video streaming platform stores their HLS video segments in Spaces and enables Spaces CDN. Viewers...
DigitalOcean's strategic positioning as a cloud provider purpose-built for AI/ML workloads at developer and SMB scale. The AI-Native Cloud approach combines GPU Droplets (NVIDIA H100/A100), the GenAI Platform for LLM agents, and developer-first pricing to make AI infrastructure accessible without hyperscaler complexity or cost. Core pillars are: on-demand GPU compute, serverless inference APIs, managed knowledge bases for RAG, and integrated monitoring — all exposed through a consistent API and simple per-hour pricing.
Example: A YC-backed AI startup builds their entire inference stack on DigitalOcean's AI-Native Cloud: GPU Dr...
A software runtime that executes trained machine learning models to generate predictions or text from new input data. Inference engines optimize for latency, throughput, and hardware utilization — converting model weights into responses as efficiently as possible. Modern LLM inference engines (vLLM, TensorRT-LLM, llama.cpp) use techniques like continuous batching, KV-cache management, and quantization to maximize tokens-per-second on GPU hardware.
Example: A DigitalOcean GenAI Platform deployment uses an inference engine under the hood to serve Llama 3 70...
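The throughput gain from batching can be sketched with back-of-the-envelope arithmetic: a GPU decode step costs roughly the same whether it produces one token or one token per sequence in the batch, so batching multiplies tokens per second until compute or memory saturates. The latencies below are illustrative, not measurements of any particular engine.

```python
# Back-of-the-envelope batching arithmetic, in the spirit of continuous
# batching in engines like vLLM. Latency numbers are illustrative.

def tokens_per_second(step_latency_ms: float, batch_size: int) -> float:
    """Each decode step emits one token per sequence in the batch."""
    steps_per_second = 1000.0 / step_latency_ms
    return steps_per_second * batch_size

single = tokens_per_second(step_latency_ms=25, batch_size=1)    # 40 tok/s
batched = tokens_per_second(step_latency_ms=30, batch_size=16)  # ~533 tok/s
print(f"{single:.0f} -> {batched:.0f} tokens/s")
```

A slightly slower step (25 ms to 30 ms) yields over 13x the aggregate throughput here, which is why batched serving dominates per-request serving for inference cost.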