
Should you deploy a Shared VPC in GCP?

Philip Wigg
June 17, 2022

Are you a Google Cloud Platform user who is considering implementing a Shared VPC in your organisation?

The official docs [1] from Google outline the use case for Shared VPC and list the advantages.

I couldn’t find a post anywhere that lists the disadvantages, complications and pitfalls. I aim to address that here. But first, the benefits…

Advantages of Shared VPC

Usually each project in a GCP organisation will have a VPC deployed within it. In fact, GCP automatically creates a “default” network in each new GCP project with a subnet in each region.

In contrast, a “Shared VPC” is deployed into a central “host” project and shared with other “service” projects.

This enables the users of the service projects (usually developers) to be able to deploy resources (VMs, etc.) into the Shared VPC. All resources deployed into the Shared VPC can then communicate securely over the network using the private address space.
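
For illustration, here is roughly what setting up this model looks like with the gcloud CLI – a minimal sketch, assuming hypothetical project IDs:

  # Enable Shared VPC on the central host project (requires the
  # Shared VPC Admin role on the organisation or folder).
  gcloud compute shared-vpc enable my-host-project

  # Attach a service project so its users can deploy into the shared network.
  gcloud compute shared-vpc associated-projects add my-service-project \
      --host-project my-host-project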

This model has several benefits:

  • Separation of duties. Routing, firewalls, subnet range allocation, etc. are all managed centrally – potentially by a separate team of network administrators.
  • Developers are still administrators of their own projects and keep control of billing, IAM, quotas, etc.
  • No need to deal with the complexity and overhead of VPC network peering between VPCs in different projects.
  • Simplified hybrid cloud – network links to on-premises environments need only be created in the Shared VPC and can then be used by everyone.
  • Security – can be combined with a GCP Organisation Policy that prevents VMs from being created with external IP addresses, helping you control access to and from the internet (see the sketch after this list).
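
As a sketch of that last point, an Organisation Policy denying external IPs on all VMs can be applied like this (the organisation ID is a placeholder):

  # Write a policy file that denies external IP addresses on all VM instances.
  cat > policy.yaml <<'EOF'
  constraint: constraints/compute.vmExternalIpAccess
  listPolicy:
    allValues: DENY
  EOF

  # Apply the policy at the organisation level.
  gcloud resource-manager org-policies set-policy policy.yaml \
      --organization=123456789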

All of this makes Shared VPC very attractive for enterprise customers.

When not to use Shared VPC?

It’s important to understand that Shared VPC is not a free lunch. When might you not want to use a Shared VPC?

  • Security. Separate VPCs, by definition, provide the highest level of network isolation. If you do adopt Shared VPC, consider a separate Shared VPC for production [2].
  • You don’t need separation of duties and responsibilities and your users are happy to manage their own networking.
  • You want to use “bleeding edge” services that might not be compatible with Shared VPC when they are released.

Things to Know

Some products are simply not compatible with Shared VPC, or come with complications. Let’s look at some examples…

App Engine Standard

Applications deployed into App Engine Standard cannot connect to resources deployed in a Shared VPC using private internal IP addresses [4]. You would have to expose your service on the internet.

App Engine Flex works fine but, despite the similar name, it is a substantially different product [3] that requires you to deploy containers.

You may not always wish to use App Engine Flex, because Standard has some advantages over Flex (for example, rapid scaling) that you may wish to keep.

Cloud Memorystore (aka Managed Redis)

Even with a normal in-project VPC, GCP Cloud Memorystore instances are always created inside a separate, Google-managed VPC, and VPC peering is transparently set up in the background.

It’s not quite as easy when a Shared VPC is involved, because you will need to create a “private services access” connection in your Shared VPC. Essentially, you run a command that allocates an internal IP address range within your Shared VPC for the Cloud Memorystore instances to use. This requires some networking know-how, and you may need your network team to help you with this step if you don’t have permissions on the Shared VPC host project.
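
A minimal sketch of that step, assuming placeholder project and network names:

  # Allocate an internal IP range in the Shared VPC for Google-managed services
  # (run against the host project).
  gcloud compute addresses create google-managed-services \
      --project=my-host-project \
      --global \
      --purpose=VPC_PEERING \
      --prefix-length=16 \
      --network=my-shared-vpc

  # Create the private services access connection using that range.
  gcloud services vpc-peerings connect \
      --project=my-host-project \
      --service=servicenetworking.googleapis.com \
      --ranges=google-managed-services \
      --network=my-shared-vpc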

It’s worth mentioning here that Cloud Memorystore is also a good example of a product that wasn’t compatible with Shared VPC when it was first released. New products often only work with project VPCs at launch, so if you want the most bleeding-edge products, that’s something to bear in mind too.

Google Kubernetes Engine (GKE)

GKE inside a Shared VPC works well. It’s one of GCP’s flagship products and has good documentation for deployment inside a Shared VPC.

However, you can deploy a GKE cluster in a “normal” project VPC with one command, so it’s fair to say the Shared VPC deployment does involve significant extra complexity. The GKE cluster service accounts will need various permissions in the Shared VPC host project to enable the cluster to function.
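
For illustration, that extra plumbing looks roughly like this – a sketch only, with placeholder names, project numbers and region; the exact roles you need depend on your setup:

  # Let the service project's GKE service agent manage Shared VPC resources.
  gcloud projects add-iam-policy-binding my-host-project \
      --member="serviceAccount:service-123456789@container-engine-robot.iam.gserviceaccount.com" \
      --role="roles/container.hostServiceAgentUser"

  # Let the service project's service accounts use the shared subnet.
  gcloud compute networks subnets add-iam-policy-binding my-subnet \
      --project=my-host-project \
      --region=europe-west2 \
      --member="serviceAccount:123456789@cloudservices.gserviceaccount.com" \
      --role="roles/compute.networkUser"

  # Create the cluster in the service project against the host project's network.
  gcloud container clusters create my-cluster \
      --project=my-service-project \
      --zone=europe-west2-a \
      --enable-ip-alias \
      --network=projects/my-host-project/global/networks/my-shared-vpc \
      --subnetwork=projects/my-host-project/regions/europe-west2/subnetworks/my-subnet \
      --cluster-secondary-range-name=pods \
      --services-secondary-range-name=services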

Kubernetes users are familiar with being able to deploy, say, an Ingress resource and have the Kubernetes cluster itself magically create the required cloud resources in the background (for example, a cloud load-balancer to facilitate the inbound internet access for the ingress). The issue here is that allowing inbound internet access requires some firewall rules to be created in the Shared VPC.

You can grant the GKE cluster the permission it requires on the Shared VPC host project so that it can create firewall rules itself, but there’s definitely a trade-off here between your network team keeping control over the Shared VPC firewalls and delegating that control back to the service project so that Kubernetes can manage the firewall rules it requires for ingress.

If you create an Ingress but the GKE cluster doesn’t have permission to create an Ingress load balancer, GKE will log an error event every few minutes containing the exact gcloud CLI command an administrator of the Shared VPC would need to run in order to create the required firewall rules – very useful.
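
The command in the logged event will be tailored to your cluster, but for illustration, such a firewall rule looks roughly like this (network, tag and port values are placeholders; the source ranges are Google’s documented load-balancer health-check ranges):

  # Allow Google Cloud load balancers and health checks to reach the nodes
  # (run by a Shared VPC admin in the host project).
  gcloud compute firewall-rules create allow-gke-ingress-lb \
      --project=my-host-project \
      --network=my-shared-vpc \
      --direction=INGRESS \
      --allow=tcp:30000-32767 \
      --source-ranges=130.211.0.0/22,35.191.0.0/16 \
      --target-tags=gke-my-cluster-node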

Before deploying the GKE cluster, the admins of the Shared VPC will also have to allocate dedicated “secondary IP ranges” from subnets of the Shared VPC: one range for pods and one for services. Again, some networking knowledge is needed here, and maybe the help of a friendly network administrator.
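
A minimal sketch of allocating those ranges on an existing shared subnet (the names and CIDRs are arbitrary examples):

  # Add named secondary ranges for GKE pods and services to the shared subnet
  # (run against the host project).
  gcloud compute networks subnets update my-subnet \
      --project=my-host-project \
      --region=europe-west2 \
      --add-secondary-ranges=pods=10.8.0.0/14,services=10.12.0.0/20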

VPC Service Controls

VPC Service Controls allow GCP administrators to create an additional layer of security around their Google Cloud resources, such as Cloud Storage buckets, Bigtable instances, etc.

This works by defining a “perimeter” around projects and resources and restricting access to anything outside of that defined perimeter.
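
For illustration, a perimeter is created with Access Context Manager – a sketch with placeholder policy ID, project number and restricted services:

  # Create a perimeter around a project, restricting the Storage and
  # Bigtable APIs to callers inside the perimeter.
  gcloud access-context-manager perimeters create my_perimeter \
      --policy=987654321 \
      --title="my-perimeter" \
      --resources=projects/123456789 \
      --restricted-services=storage.googleapis.com,bigtable.googleapis.com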

My personal view is that the way that VPC Service Controls have been designed fundamentally clashes with the Shared VPC model. If I knew I wanted to use VPC Service Controls, I would not deploy a Shared VPC.

However, I will not go into detail here but instead refer you to this excellent blog post [5], which describes the issue along with a suggested workaround.

Conclusion

I hope that was some useful information for people thinking of deploying Shared VPC in Google Cloud, and that I have illustrated that Shared VPC is not a free lunch: make sure the trade-off is worth it for your organisation.

More Questions?

MakeCloud is a DevOps and cloud consultancy based in London. Please feel free to get in touch if you have any questions about Google Cloud, AWS, DevOps, etc.

[1] https://cloud.google.com/solutions/best-practices-vpc-design#shared-vpc

[2] https://cloud.google.com/solutions/best-practices-vpc-design#multiple-host-project-multiple-service-projects-multiple-shared-vpc

[3] https://cloud.google.com/appengine/docs/the-appengine-environments

[4] https://cloud.google.com/vpc/docs/configure-serverless-vpc-access

[5] https://medium.com/google-cloud/gcp-vpc-sc-with-shared-vpc-network-526f85377cdd
