I am hosting an e-commerce application on Google Cloud. Things are mostly working well with my current setup.
I have a single application behind a load balancer, serving two stores. I have three instance groups:
- Store 1
- Store 2
- Admin (for running cron jobs and consumers)
My primary region is us-central1, and my Cloud SQL instance (MySQL 8.0) is in the same region.
I also have a managed Redis setup, which is in the same region.
Each instance group has one instance in that region.
Whenever I create an instance group in, say, us-west1 and add it as a backend to my load balancer, traffic is extremely slow for connections that hit that server.
My assumption was that a highly available load balancer would route traffic to the closest resources for optimal performance. I guess I’m missing something here.
I noticed that when I created a Cloud SQL private IP, it created a VPC peering connection (servicenetworking-googleapis-com). It only included two inbound routes for us-central1, none for us-west1. See screenshot: https://i.imgur.com/QGJM5tk.png
My question is: what do I need to do so that my application in regions outside my primary region isn’t super slow? Currently, my uptime monitor reports 2s to get a response from my west server, as opposed to 185ms from my central server.
Best answer by hi5guy
So you have two frontends and one backend (Cloud SQL + Cloud Memorystore), with a frontend in each of the two regions.
Your frontend in us-west1 is connecting to your backend in us-central1, so that’s where your latency (and network egress costs) are coming from.
Either use another local Cloud Memorystore instance as a cache in the second region, or a full Cloud SQL read replica, although writes would still go to the primary MySQL instance.
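If it helps, both options can be sketched with the gcloud CLI. The instance names below (`store-db`, `store-db-west-replica`, `store-cache-west`) are placeholders, assuming your primary Cloud SQL instance lives in us-central1:

```shell
# Create a Cloud SQL read replica in us-west1
# (replica serves reads locally; writes still go to the primary)
gcloud sql instances create store-db-west-replica \
  --master-instance-name=store-db \
  --region=us-west1

# Create a separate Memorystore (Redis) cache local to us-west1
gcloud redis instances create store-cache-west \
  --region=us-west1 \
  --size=1
```

Your app in us-west1 would then point its read connections at the replica and its cache client at the regional Redis instance.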
You can also enable Cloud CDN on your HTTPS load balancer to reduce egress costs and latency.
In short, using two regions and a global LB doesn’t give you a fully multi-region setup unless your backend is also replicated; otherwise, go the cache/CDN route to improve your connection time and costs.
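Enabling Cloud CDN is a one-flag change on the backend service behind the HTTPS LB (the backend service name here is a placeholder):

```shell
# Turn on Cloud CDN for the load balancer's backend service,
# caching static responses at Google's edge locations
gcloud compute backend-services update store-backend \
  --global \
  --enable-cdn \
  --cache-mode=CACHE_ALL_STATIC
```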
Reply or mention if you need further help!
I actually just solved this moments ago on my own.
I edited the servicenetworking-googleapis-com peering connection to check the ‘Import custom routes’ option, and my response time from the west region dropped from 2s to 0.29ms.
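For anyone else hitting this, the same fix can be done from the CLI (assuming your VPC is named `default` — substitute your own network name):

```shell
# Inspect which routes the service-networking peering currently imports
gcloud compute networks peerings list-routes servicenetworking-googleapis-com \
  --network=default \
  --region=us-west1 \
  --direction=INCOMING

# Enable importing custom routes on the peering
# (equivalent to the 'Import custom routes' checkbox in the console)
gcloud compute networks peerings update servicenetworking-googleapis-com \
  --network=default \
  --import-custom-routes
```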
Thank you for the prompt and thorough response!