If you’ve ever tried to connect a GKE cluster to Cloud SQL or Memorystore with a private IP, you’ve probably hit the wall of acronyms: VPC Peering, PSA, PSC. The documentation is scattered, and most tutorials skip the “why” entirely.
This article breaks down each approach, when to use it, and the trade-offs you’ll actually care about in production.
The Problem: Your App Needs to Talk to Managed Services
You have a VPC. Inside it, your GKE pods, Compute Engine VMs, or Cloud Run services are humming along. Now you need a database (Cloud SQL), a cache (Memorystore Redis), or another managed service — and you don’t want traffic leaving your private network.
Google manages these services in their own VPC networks, not yours. So the fundamental question is: how do you create a private network path between your VPC and Google’s?
There are three mechanisms, each designed for different scenarios.
1. VPC Network Peering
What It Is
VPC Peering creates a direct network connection between two VPC networks. Routes are exchanged, and instances in both networks can communicate using private IPs — no public internet, no gateways, no proxies.
How It Works
Key properties:
- Bidirectional — both sides can initiate connections
- Non-transitive — if VPC-A peers with VPC-B, and VPC-B peers with VPC-C, VPC-A cannot reach VPC-C through VPC-B
- No IP overlap allowed — the two VPCs cannot share any CIDR ranges
- Limit: 25 peering connections per VPC
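As a sketch of what peering your own VPCs looks like in Terraform (network names here are placeholders, not from a real setup), note that both sides must declare their half of the connection:

```hcl
# Hypothetical example: peer vpc-a and vpc-b. A peering only becomes
# ACTIVE once BOTH sides have created their half of the connection.
resource "google_compute_network_peering" "a_to_b" {
  name         = "a-to-b"
  network      = google_compute_network.vpc_a.id
  peer_network = google_compute_network.vpc_b.id
}

resource "google_compute_network_peering" "b_to_a" {
  name         = "b-to-a"
  network      = google_compute_network.vpc_b.id
  peer_network = google_compute_network.vpc_a.id
}
```

Because routes are exchanged automatically, no additional route resources are needed; firewall rules, however, still apply on each side.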
When to Use
- Connecting your own VPCs (e.g., a shared-services VPC peered with application VPCs)
- Multi-team setups where each team owns a VPC
- Hub-and-spoke network architectures
When NOT to Use
- Connecting to Google-managed services (Cloud SQL, Memorystore, etc.) — you can’t directly peer with Google’s service producer network. That’s what PSA and PSC are for.
2. Private Service Access (PSA)
What It Is
PSA is Google’s mechanism for connecting your VPC to Google’s service producer network — the internal Google-managed VPC where services like Cloud SQL, Memorystore, and AlloyDB actually run.
Under the hood, PSA is VPC Peering — but it’s a special, managed peering connection to servicenetworking.googleapis.com.
How It Works
You reserve an IP range in your VPC and “donate” it to Google’s service network. Google then creates VMs for your Cloud SQL or Memorystore instance in that range — so from your VPC’s perspective, the service has a normal private IP.
The Terraform / OpenTofu Setup
```hcl
# Step 1: Reserve an IP range for Google
resource "google_compute_global_address" "psa_range" {
  name          = "my-psa-range"
  purpose       = "VPC_PEERING"
  address_type  = "INTERNAL"
  prefix_length = 20 # /20 = 4,096 IPs
  network       = google_compute_network.my_vpc.id
}

# Step 2: Create the peering connection
resource "google_service_networking_connection" "psa" {
  network                 = google_compute_network.my_vpc.id
  service                 = "servicenetworking.googleapis.com"
  reserved_peering_ranges = [google_compute_global_address.psa_range.name]
}

# Step 3: Cloud SQL can now use private IP
resource "google_sql_database_instance" "db" {
  name             = "my-db"
  database_version = "POSTGRES_15"
  region           = "us-central1"

  settings {
    tier = "db-f1-micro"
    ip_configuration {
      ipv4_enabled    = false
      private_network = google_compute_network.my_vpc.id
    }
  }

  depends_on = [google_service_networking_connection.psa]
}
```
Pros
- Mature and battle-tested — available since 2018, widely adopted
- Simple mental model — managed services get IPs on your network, reachable like any other private IP
- Bidirectional — Google services can reach back into your VPC (needed for Cloud SQL import/export, PITR)
- One setup, many services — a single PSA connection serves Cloud SQL, Memorystore, AlloyDB, and more
Cons
- IP range reservation required — you must allocate a /16 or /20 (or similar) upfront, which consumes address space even if unused
- Shared peering — all PSA services share the same peering connection
- Counts toward peering quota — the 25-peering-per-VPC limit applies
- Range planning complexity — the reserved range must not overlap with subnets, pod ranges, or service ranges
Services That Support PSA
| Service | PSA Supported | Notes |
|---|---|---|
| Cloud SQL | Yes | Full feature support including PITR |
| Memorystore Redis | Yes | Via PRIVATE_SERVICE_ACCESS connect mode |
| AlloyDB | Yes | Default connectivity model |
| Memorystore Memcached | Yes | Default connectivity model |
3. Private Service Connect (PSC)
What It Is
PSC is Google’s newer, more granular approach to private connectivity. Instead of peering entire networks, PSC creates a dedicated endpoint (a forwarding rule with a single IP) in your VPC that points to a specific service instance.
How It Works
Each managed service instance gets its own dedicated endpoint — a single IP address in your VPC that routes to that specific instance.
The Terraform / OpenTofu Setup
```hcl
# Step 1: Create Cloud SQL with PSC enabled
resource "google_sql_database_instance" "db" {
  name             = "my-db"
  database_version = "POSTGRES_15"
  region           = "us-central1"

  settings {
    tier = "db-f1-micro"
    ip_configuration {
      psc_config {
        psc_enabled               = true
        allowed_consumer_projects = [var.project_id]
      }
      ipv4_enabled = false # PSC instances use private connectivity only
    }
  }
}

# Step 2: Create the PSC endpoint in your VPC
resource "google_compute_address" "psc_endpoint" {
  name         = "cloudsql-psc"
  subnetwork   = google_compute_subnetwork.my_subnet.id
  address_type = "INTERNAL"
  address      = "10.0.0.50"
}

resource "google_compute_forwarding_rule" "psc" {
  name                  = "cloudsql-psc"
  target                = google_sql_database_instance.db.psc_service_attachment_link
  network               = google_compute_network.my_vpc.id
  ip_address            = google_compute_address.psc_endpoint.id
  load_balancing_scheme = "" # must be empty for PSC endpoints
}
```
Pros
- No IP range reservation — uses a single IP per endpoint, no wasted address space
- Per-instance isolation — each service instance has its own dedicated endpoint
- No peering quota consumed — doesn’t count toward the 25-peering limit
- Multi-VPC friendly — endpoints can be created in multiple VPCs pointing to the same service
- Google’s recommended direction — actively being expanded to more services
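To illustrate the multi-VPC point, here is a hedged sketch of a second endpoint in another consumer VPC pointing at the same Cloud SQL instance (the `other_*` names are placeholders; that VPC's project must appear in `allowed_consumer_projects`):

```hcl
# Hypothetical second PSC endpoint for the SAME Cloud SQL instance,
# created in a different consumer VPC. Each VPC gets its own IP and
# forwarding rule; the service attachment is shared.
resource "google_compute_address" "psc_endpoint_vpc2" {
  name         = "cloudsql-psc-vpc2"
  subnetwork   = google_compute_subnetwork.other_subnet.id
  address_type = "INTERNAL"
}

resource "google_compute_forwarding_rule" "psc_vpc2" {
  name                  = "cloudsql-psc-vpc2"
  target                = google_sql_database_instance.db.psc_service_attachment_link
  network               = google_compute_network.other_vpc.id
  ip_address            = google_compute_address.psc_endpoint_vpc2.id
  load_balancing_scheme = "" # must be empty for PSC endpoints
}
```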
Cons
- Unidirectional — your VPC can reach the service, but the service cannot initiate connections back to your VPC
- More setup per instance — each service instance needs its own forwarding rule and IP
- Feature limitations — Cloud SQL with PSC doesn’t support external replicas or certain import/export operations that require reverse connectivity
- DNS configuration needed — you need to set up DNS to resolve the service’s hostname to the PSC endpoint IP
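The DNS requirement from the list above can be covered with a private Cloud DNS zone. A sketch, assuming the `psc_endpoint` address from the setup section and that the provider's `dns_name` attribute on the instance is available (zone names are placeholders):

```hcl
# Cloud SQL publishes the hostname clients should use for a PSC instance
# (a name under sql.goog); map it to the endpoint IP in a private zone
# visible only to this VPC.
resource "google_dns_managed_zone" "psc" {
  name       = "cloudsql-psc-zone"
  dns_name   = "sql.goog." # parent domain of Cloud SQL PSC hostnames
  visibility = "private"

  private_visibility_config {
    networks {
      network_url = google_compute_network.my_vpc.id
    }
  }
}

resource "google_dns_record_set" "db" {
  managed_zone = google_dns_managed_zone.psc.name
  name         = "${google_sql_database_instance.db.dns_name}." # e.g. <uid>.<region>.sql.goog.
  type         = "A"
  ttl          = 300
  rrdatas      = [google_compute_address.psc_endpoint.address]
}
```

Without this record, clients would have to connect to the raw endpoint IP, which breaks TLS hostname verification for some drivers.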
Services That Support PSC
| Service | PSC Supported | Notes |
|---|---|---|
| Cloud SQL | Yes | Limited: no external replicas, some import/export restrictions |
| Memorystore Redis | No | Use DIRECT_PEERING or PSA instead |
| AlloyDB | Yes | Fully supported |
| Vertex AI | Yes | Fully supported |
4. The Special Case: Direct Peering (Memorystore Redis)
Memorystore Redis has a unique option that doesn’t fit neatly into the PSA/PSC model: DIRECT_PEERING.
When you create a Redis instance with connect_mode = "DIRECT_PEERING", Google automatically creates a VPC peering connection between your VPC and the Redis instance’s network. No PSA setup needed — no IP range reservation, no service networking connection.
```hcl
resource "google_redis_instance" "cache" {
  name               = "my-cache"
  memory_size_gb     = 2
  authorized_network = google_compute_network.my_vpc.id
  connect_mode       = "DIRECT_PEERING" # this is the default
}
```
This is the simplest connectivity model — zero VPC-level prerequisites. The trade-off is that each Redis instance creates its own peering, which counts toward your 25-peering limit.
Decision Matrix: Which Should You Use?
| Criteria | VPC Peering | PSA | PSC | Direct Peering |
|---|---|---|---|---|
| Use case | Your VPCs | Managed services | Managed services | Memorystore Redis |
| Setup complexity | Low | Medium | High | None |
| IP planning | Avoid overlap | Reserve large range | Single IP/endpoint | Automatic |
| Peering quota | Yes (1 per peer) | Yes (1 for all) | No | Yes (1 per instance) |
| Directionality | Bidirectional | Bidirectional | Unidirectional | Bidirectional |
| Isolation | Per-VPC | Shared across svc | Per-instance | Per-VPC |
| Address space | None | High (/16 or /20) | Minimal (1 IP) | None |
Practical Recommendations
Small to Medium Deployments (1 VPC, few managed services)
Use PSA. It’s the simplest production-ready setup:
- One-time IP range reservation
- One peering connection covers Cloud SQL, Memorystore, AlloyDB
- Bidirectional connectivity means all features work (PITR, import/export)
- Use `DIRECT_PEERING` for Redis if you want to avoid PSA for that service
Large / Multi-VPC Deployments
Use PSC exclusively:
- PSC doesn’t consume peering quota — critical when you have many VPCs
- Per-instance endpoints give better security isolation
- Fall back to PSA for services that don’t support PSC yet
- Use `DIRECT_PEERING` for Redis
Multi-Tenant / Strict Isolation Requirements
Use PSC exclusively:
- Each tenant’s database gets its own isolated endpoint
- No shared IP ranges between tenants
- Fine-grained IAM control per endpoint
Common Pitfalls
1. IP Range Conflicts with PSA
Your PSA reserved range must not overlap with:
- Subnet primary ranges
- GKE pod secondary ranges
- GKE service secondary ranges
- Any other reserved ranges
Plan your IP addressing upfront. A common scheme: 10.0.0.0/20 for subnets, 10.1.0.0/20 for PSA, 10.2.0.0/14 for GKE pods.
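That scheme can be pinned down in Terraform so the ranges can't drift into each other. A sketch (names, region, and the `10.6.0.0/20` service range are assumptions, not from the text):

```hcl
# Hypothetical layout following the scheme above:
# 10.0.0.0/20 subnets, 10.1.0.0/20 PSA, 10.2.0.0/14 GKE pods.
resource "google_compute_subnetwork" "primary" {
  name          = "primary"
  region        = "us-central1"
  network       = google_compute_network.my_vpc.id
  ip_cidr_range = "10.0.0.0/20" # node / VM primary range

  secondary_ip_range {
    range_name    = "pods"
    ip_cidr_range = "10.2.0.0/14" # GKE pod secondary range (10.2-10.5)
  }
  secondary_ip_range {
    range_name    = "services"
    ip_cidr_range = "10.6.0.0/20" # GKE service secondary range (assumed)
  }
}

resource "google_compute_global_address" "psa" {
  name          = "psa-range"
  purpose       = "VPC_PEERING"
  address_type  = "INTERNAL"
  address       = "10.1.0.0" # pin the range so it can't overlap the others
  prefix_length = 20
  network       = google_compute_network.my_vpc.id
}
```

Setting `address` explicitly matters: if you omit it, Google picks a free range for you, which is harder to reason about across environments.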
2. Forgetting depends_on with PSA
Cloud SQL will fail to create if the PSA connection isn’t ready. Always add:
```hcl
resource "google_sql_database_instance" "db" {
  # ...
  depends_on = [google_service_networking_connection.psa]
}
```
3. Assuming PSC Is Bidirectional
If your Cloud SQL instance needs to reach back into your VPC (certain import/export operations, some replication scenarios), PSC won’t work. You need PSA.
4. Hitting the 25-Peering Limit
Each DIRECT_PEERING Redis instance and each PSA connection consumes a peering slot. If you’re running many Redis instances across environments, consider using PSA mode for Redis instead (connect_mode = "PRIVATE_SERVICE_ACCESS").
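Switching an instance to PSA mode is a small change, sketched here on the earlier cache example (it assumes the `psa_range` and `psa` connection from the PSA section already exist in your configuration):

```hcl
# Sketch: Redis over the shared PSA peering instead of its own peering.
resource "google_redis_instance" "cache" {
  name               = "my-cache"
  memory_size_gb     = 2
  authorized_network = google_compute_network.my_vpc.id
  connect_mode       = "PRIVATE_SERVICE_ACCESS" # shares the one PSA peering
  reserved_ip_range  = google_compute_global_address.psa_range.name

  depends_on = [google_service_networking_connection.psa]
}
```

Note that `connect_mode` is immutable: changing it on an existing instance forces a destroy-and-recreate, so plan the migration accordingly.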
The Future
Google is clearly investing in PSC as the long-term direction. More services are adding PSC support with each release, and the feature gap between PSA and PSC is narrowing. But PSA isn’t going away — it’s too deeply embedded in production workloads.
My advice: Start with PSA for simplicity. Move to PSC when you hit scale, need multi-VPC connectivity, or require per-instance network isolation. Use DIRECT_PEERING for Redis when you want zero-config simplicity and aren’t worried about peering quota.
The best networking decision is the one your team can operate confidently at 2 AM during an incident.