June 9, 2020 2:51 pm Published by Philip Wigg

Are you a Google Cloud user who is considering implementing a Shared VPC in your organisation? The official docs from Google outline the use-case for Shared VPC and list the advantages.

I couldn’t find a list anywhere of the potential issues and pitfalls – including which products don’t play nicely with Shared VPC. I aim to address that here.

Why use a Shared VPC?

Usually each project in a GCP organisation will have a “project VPC” deployed within it. In fact, GCP automatically creates a “default” VPC in each new GCP project with a subnet in each region.

In contrast, a “Shared VPC” is deployed into a central “host” project and shared with other “service” projects.

This enables the users of the service projects (usually developers) to be able to deploy resources (VMs, etc.) into the Shared VPC. All resources deployed into the Shared VPC can then communicate securely over the network using the private address space.
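The basic host/service setup can be sketched with a couple of gcloud commands (the project IDs here are placeholders):

```shell
# Enable Shared VPC on the host project (requires the Shared VPC Admin role).
gcloud compute shared-vpc enable my-host-project

# Attach a service project to the host project.
gcloud compute shared-vpc associated-projects add my-service-project \
    --host-project my-host-project
```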

This model has several benefits:

  • Separation of duties. Routing, firewalls, subnet range allocation, etc. are all managed centrally – potentially by a separate team of network administrators.
  • Developers are still administrators of their own projects and keep control of billing, IAM, quotas, etc.
  • No need to deal with the complexity and overhead of VPC network peering between VPCs in different projects.
  • Simpler hybrid cloud – network links to on-premises environments need to be created only once, in the Shared VPC, and can then be used by everyone.
  • Security – Shared VPC can be combined with a GCP Organisation Policy that prevents VMs from being created with external IP addresses, helping you control access to and from the internet.
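
As a sketch, that organisation policy can be applied with a policy file – the organisation ID below is a placeholder:

```shell
# deny-external-ip.yaml – deny external IPs for all VMs in the organisation.
cat > deny-external-ip.yaml <<'EOF'
constraint: constraints/compute.vmExternalIpAccess
listPolicy:
  allValues: DENY
EOF

gcloud resource-manager org-policies set-policy deny-external-ip.yaml \
    --organization 123456789012
```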

When not to use Shared VPC?

It’s important to understand that Shared VPC is not a free lunch. When might you not want to use a Shared VPC?

  • Security. Separate VPCs, by definition, provide the highest level of network isolation. If you do adopt Shared VPC, consider a separate Shared VPC for production.
  • You don’t need separation of duties and responsibilities and your users are happy to manage their own networking.

However, I feel the main disadvantage for most users will be the additional complexity that Shared VPC can introduce when used with some other common GCP products and services.

Let’s look at some examples…

App Engine Standard

Applications deployed into App Engine Standard cannot connect to resources deployed in a Shared VPC using private internal IP addresses. You would have to expose your App Engine Standard service on the internet.

App Engine Flex works fine but, despite the similar name, it is a substantially different product and requires that you deploy containers. App Engine Standard also has some advantages over App Engine Flex (for example, rapid scaling) which you may wish to leverage.

Cloud Memorystore (aka Redis)

Even with a normal in-project VPC, GCP Cloud Memorystore instances are always created inside a separate, Google-managed VPC, and VPC peering is transparently created in the background.

It’s not quite as easy when a Shared VPC is involved because you will need to create a “private services access connection” in your Shared VPC. Essentially, this means you need to run a command which allocates an internal IP address range within your Shared VPC which will be used by the Cloud Memorystore instances. This requires some networking know-how and you may need your network team to help you with this step if you don’t have permissions to the Shared VPC host project.
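A sketch of those two steps, run against the host project (the network, range and project names are placeholders):

```shell
# Allocate an internal IP range in the Shared VPC for Google-managed services.
gcloud compute addresses create google-managed-services \
    --global \
    --purpose=VPC_PEERING \
    --prefix-length=16 \
    --network=shared-vpc \
    --project=my-host-project

# Create the private services access connection using that range.
gcloud services vpc-peerings connect \
    --service=servicenetworking.googleapis.com \
    --ranges=google-managed-services \
    --network=shared-vpc \
    --project=my-host-project
```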

It’s worth mentioning here too that Cloud Memorystore is a good example of a product which wasn’t compatible with Shared VPC when it was first released. It seems like new products often only work with project VPCs upon first launch so if you want the most bleeding edge new products that’s something to bear in mind too.

Google Kubernetes Engine (GKE)

GKE inside a Shared VPC works well. It’s one of GCP’s flagship products and has good documentation for deployment inside a Shared VPC.

However, you can deploy a GKE cluster in a “normal” project VPC with one command so it’s fair to say the Shared VPC deployment does involve significant extra complexity. The GKE cluster service accounts will need various permissions in the Shared VPC host project to enable the cluster to function.
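For example, the GKE service account from the service project typically needs the Compute Network User role on the subnet it will use – a sketch, with placeholder names and project numbers:

```shell
# Grant the service project's GKE service account access to the subnet
# in the Shared VPC host project.
gcloud compute networks subnets add-iam-policy-binding gke-subnet \
    --region europe-west1 \
    --project my-host-project \
    --member "serviceAccount:service-123456789012@container-engine-robot.iam.gserviceaccount.com" \
    --role "roles/compute.networkUser"
```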

Kubernetes users are familiar with being able to deploy, say, a K8s Ingress resource and have the Kubernetes cluster itself magically create the required cloud resources in the background (for example, a cloud load-balancer to facilitate the inbound internet access for the ingress). The issue here is that allowing inbound internet access requires some firewall rules to be created in the Shared VPC.

You can grant the GKE cluster the permission it requires on the Shared VPC host project so it can create firewall rules itself, but there’s definitely a trade-off here between your network team keeping control over the Shared VPC firewalls and delegating that control back to the service project so that Kubernetes can configure the firewall rules it needs for inbound internet access.
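
If you do choose to delegate, one approach (a sketch; the project name and number are placeholders) is to grant the GKE service account a firewall-management role on the host project:

```shell
# Allow the GKE service account to manage firewall rules in the host project.
# roles/compute.securityAdmin is broad – a custom role limited to firewall
# permissions would be tighter.
gcloud projects add-iam-policy-binding my-host-project \
    --member "serviceAccount:service-123456789012@container-engine-robot.iam.gserviceaccount.com" \
    --role "roles/compute.securityAdmin"
```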

If you create an Ingress but the GKE cluster doesn’t have the permissions it needs in the Shared VPC host project, then GKE will log an error event every few minutes containing the exact gcloud CLI command an administrator of the Shared VPC would need to run to create the required firewall rules – very useful.

Before deploying the GKE cluster, the admins of the Shared VPC will also have to allocate dedicated “secondary IP ranges” from subnets of the Shared VPC. You’ll need one range for pods and one range for services. Again, some networking knowledge is needed here, and maybe the help of a friendly network administrator.
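
Creating a subnet with the two secondary ranges might look like this (all names and CIDR ranges are placeholders):

```shell
# Create a subnet in the Shared VPC with secondary ranges for
# GKE pods and services.
gcloud compute networks subnets create gke-subnet \
    --network shared-vpc \
    --region europe-west1 \
    --range 10.0.0.0/20 \
    --secondary-range pods=10.4.0.0/14,services=10.8.0.0/20 \
    --project my-host-project
```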

Cloud Functions

Give up hope – Cloud Functions can’t access resources inside a Shared VPC using their internal IP addresses. So again you’ll have to expose the services on the public internet or use a non-shared VPC. Cloud Functions can access resources in a non-shared VPC using Serverless VPC Access, but that isn’t compatible with Shared VPC.
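For reference, a Serverless VPC Access connector in a normal project VPC is created along these lines (the connector name, network and range are placeholders):

```shell
# Create a Serverless VPC Access connector in a (non-shared) project VPC.
gcloud compute networks vpc-access connectors create my-connector \
    --network my-project-vpc \
    --region europe-west1 \
    --range 10.8.0.0/28
```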

VPC Service Controls

VPC Service Controls allow GCP administrators to create an additional layer of security around their Google Cloud resources such as Cloud Storage buckets, Bigtable instances, etc.

This works by defining a “perimeter” of projects and resources and restricting access to anything outside of the defined perimeter.

My personal view is that the way that VPC Service Controls have been designed fundamentally clashes with the Shared VPC model. If I knew I wanted to use VPC Service Controls, I would not deploy a Shared VPC.

However, I will not go into detail here but instead refer you to this excellent blog post which describes the issue along with a suggested workaround.

Conclusion

I hope this was useful for anyone thinking of deploying Shared VPC in Google Cloud and weighing up the trade-offs involved.

Please feel free to get in touch with me at phil@makecloud.io if you have any questions.
