Create a reliable and scalable cloud infrastructure that can adapt to a growing and evolving business.
Let us introduce you to our Infrastructure Moderators:

Dickson Victor
Username: vick
Bio: I am a passionate learner and a tech enthusiast. I love contributing to the knowledge of others, and I feel a lot of fulfilment when I am able to help another person solve a challenge or fix a problem. Having come from a non-tech background and successfully pivoted into tech, I have learned to be empathetic and to understand the pain points of new learners trying to break into a tech career.
Company: NVIT - New Vision Institute of Technology
Job Title: DevOps Engineer

Seiji Manoan Seo
Username: seijimanoan
Bio: My name is Seiji. I'm a 30-year-old dad of two based in Brazil. I have a BS in Management and an MBA in Cloud Computing. I have worked in IT for over ten years. I began with web development and simple automation scripts, and then went on to full-stack development, mobile, and infrastructure. In the past 3 years, I've had experience working for an investment broker, a
🎉 New Cloud Armor features including rate limiting, adaptive protection, and bot defense

General Availability of new capabilities in Cloud Armor that can greatly improve the security, reliability, and availability of deployments, including:
- Per-client rate limiting with two new rule actions: "throttle" and "rate_based_ban"
- Bot management with reCAPTCHA Enterprise
- Machine learning-based Adaptive Protection to help counter advanced Layer 7 attacks

We are also announcing new Cloud Armor features in Preview, including:
- Updated preconfigured WAF rules based on CRS 3.3
- Network-based threat intelligence to help block known bad traffic

These new capabilities help provide enterprise-ready DDoS protection and web application firewall (WAF) solutions at planet scale for our customers' workloads, whether they are located on-premises, in colocation, or in any public cloud.

👉 Read further: https://cloud.google.com/blog/products/identity-security/announcing-new-cloud-armor-rate-li
Docker is an open source containerization platform. It makes building, deploying, and managing containerized applications easier, simpler, and safer. Docker provides the ability to package and run an application in a loosely isolated environment called a container. This isolation and security allow you to run many containers simultaneously on a given host. Containers are lightweight and contain everything needed to run the application, so you do not need to rely on what is currently installed on the host. You can easily share containers while you work, and be sure that everyone you share with gets the same container that works in the same way. Read more here
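To make the "package everything the app needs" idea concrete, here is a minimal sketch of a Dockerfile; the base image, app file name, and port are hypothetical, not taken from any specific project:

```dockerfile
# Hypothetical example: package a small Python web app into a container.
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code into the image
COPY . .

EXPOSE 8080
CMD ["python", "app.py"]
```

Building (`docker build -t myapp .`) and running (`docker run -p 8080:8080 myapp`) this image gives every teammate the same container, regardless of what happens to be installed on their host.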
Hi there! We've come across a challenge using an internal HTTP load balancer (L7) and could use some advice. We're working on a project in Canada that requires GPUs (which GCP does not offer in Canada). Our Cloud Run project needs to stay in Canada due to speed/latency and data laws; however, this is forcing us to use VMs (GCE) in the US to gain access to GPUs. We want to internally load balance (HTTP) on our VPC (Cloud Run as the client in the Canada region and GCE+GPUs as the backend in a US region), but this doesn't seem possible. It appears the only way to load balance across regions is to use a TCP load balancer, which doesn't work as well (it doesn't allow us to scale on metrics like number of requests or requests/second). We've considered setting up an Nginx proxy and other types of proxies that would allow us to cross regions, but it would be so much easier to use a GCP-native solution that autoscales. Any suggestions? Thanks!
It’s now GA! General availability of the PostgreSQL interface for Cloud Spanner. The PostgreSQL interface is available at no additional cost in all regional and multi-regional Spanner configurations. The PostgreSQL interface is a new way to access Spanner. It combines the familiarity and portability of PostgreSQL with the unmatched scalability and fully managed experience of Spanner. DevOps teams that have scaled their databases with brittle sharding or complex replication can now simplify their architecture with Spanner, using the tools and skills they already have. Because it’s PostgreSQL, you can be sure that the schemas and queries you write in Spanner are easily portable. And because it‘s Spanner, you can trust that it will grow with your business and development team. Try it out today using a new granular Spanner instance, starting at $65 USD/month, or as low as $40 USD/month with a three-year commitment. PostgreSQL DDL and DML from psql 👉 Read further on https://cloud.google.c
Setting Up Your GCP Foundations Through Terraform — Chapter 2 — Access to GCP, Setting up Billing & Preparing Github
A bit later than promised, but here it is: Setting Up Your GCP Foundations Through Terraform — Chapter 2 — Access to GCP, Setting Up Billing & Preparing Github Coming next week: Deploying your bootstrap project and laying the CI/CD foundations.
Over the last month, I have been spending a lot of time #learning and getting hands-on with #terraform. So I decided to start writing a weekly blog post on how to build a secure #gcp foundation for your application through Terraform. Chapter 1 is live now on #medium - https://medium.com/@goodmanjoel2017/setting-up-your-gcp-foundations-through-terraform-chapter-1-introduction-first-steps-33bd11e949e5
I just completed a Qwiklabs demo on Firebase. As a developer, Firebase allows you to create Android, iOS, and web applications easily and quickly, just like WordPress. In this demo, I configured and deployed a chat client application using Firebase. I implemented the following tasks:
- Sync data using Cloud Firestore and Cloud Storage for Firebase.
- Authenticate users using Firebase Auth.
- Deploy the web app on Firebase Hosting.
- Send notifications with Firebase Cloud Messaging.

You can try it out on Qwiklabs: https://www.qwiklabs.com/focuses/660?parent=catalog #developer #firebase #qwiklabs
Network Analyzer offers an out-of-the-box suite of always-on analyzers that continuously monitor GCE and GKE network configuration. These analyzers run in the background, monitoring network services like load balancers, hybrid connectivity, and connectivity to Google services like Cloud SQL. As users continually push out config changes, or as the metrics for their deployment change, the relevant analyzers automatically surface failure conditions or suboptimal configurations.

Surfacing insights through Network Analyzer
Network Analyzer prioritizes and proactively surfaces insights to users at a project level or across multiple projects. It identifies the root cause of each surfaced insight and provides a link to documentation with recommendations for fixing it.

👉 Read further: Introducing Network Analyzer: detect service and network issues | Google Cloud Blog
PostgreSQL uses transaction IDs (also called TXIDs or XIDs) to implement Multi-Version Concurrency Control (MVCC) semantics.

To prevent transaction ID wraparound, PostgreSQL uses a vacuum mechanism, which operates as a background task called autovacuum (enabled by default), or can be run manually using the VACUUM command. A vacuum operation freezes committed transaction IDs and releases them for further use. You can think of this mechanism as "recycling" of transaction IDs that keeps the database operating despite using a finite number to store the transaction ID.

Vacuum can sometimes be blocked due to workload patterns, or it can become too slow to keep up with database activity. If transaction ID utilization continues to grow despite the freezing performed by autovacuum or manual vacuum, the database will eventually refuse to accept new commands to protect itself against TXID wraparound. To help you monitor your database and ensure that this doesn't happen, Cloud SQL for PostgreSQL
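The wraparound risk exists because XIDs live in a finite 32-bit space and are compared modulo 2^32. Here is a rough conceptual sketch in plain Python, not PostgreSQL's actual code; the function merely mirrors the idea behind PostgreSQL's modular XID comparison:

```python
# Conceptual sketch of why 32-bit transaction IDs need "freezing".
# XIDs are compared on a modulo-2^32 circle, so an XID can only be
# distinguished as "in the past" for about 2^31 transactions.

XID_SPACE = 2 ** 32   # 32-bit transaction ID space
HORIZON = 2 ** 31     # modular comparison horizon

def xid_precedes(a: int, b: int) -> bool:
    """True if XID a is logically older than XID b under
    modulo-2^32 comparison."""
    return (a - b) % XID_SPACE >= HORIZON

old_row_xid = 100

# Shortly afterwards, the old row is correctly seen as in the past:
print(xid_precedes(old_row_xid, 5_000))      # True

# But after ~2^31 more transactions with no freezing, the same old
# XID appears to lie in the FUTURE relative to the current XID:
current = (old_row_xid + HORIZON + 10) % XID_SPACE
print(xid_precedes(old_row_xid, current))    # False
```

Freezing (via vacuum) replaces old XIDs with a permanently "in the past" marker before the comparison horizon is reached, which is why the database refuses new commands rather than let this flip happen.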
Run production workloads for as low as $40/month 😀

Cloud Spanner is a relational database service that offers industry-leading 99.999% availability and near-unlimited scale to handle even the most demanding workloads. With granular instance sizing, you can still get all of the Spanner benefits at a much lower cost, like transparent replication across zones and regions, high availability, resilience to different types of failures, and the ability to scale up and down as needed without any downtime. And with Committed Use Discounts, the entry price for production workloads drops below $40/month, as you receive a 40% discount for a 3-year commitment.

Read further: https://cloud.google.com/blog/products/databases/use-spanner-at-low-cost-with-granular-instance-sizing
Well, it was in Preview... but finally it's GA 😎

You can now begin deploying Spot VMs in your Google Cloud projects to start saving. For an overview of Spot VMs, see the Preview launch blog, and for a deeper dive, check out the Spot VM documentation. Modern applications such as microservices, containerized workloads, and horizontally scalable applications are engineered to persist even when the underlying machine does not. This architecture allows you to leverage Spot VMs to access capacity and run applications at a low price: you will save 60-91% off the price of on-demand VMs with Spot VMs. To make it even easier to utilize Spot VMs, we've incorporated Spot VM support into a variety of tools. GKE, for instance 🚀

Read further: https://cloud.google.com/blog/products/compute/google-cloud-spot-vms-now-ga
We’re facing two odd (and related) problems:
- There doesn’t seem to be a way to establish a PTR record for our primary domain’s IP address, as it’s assigned to a GCP load balancer.
- A reverse lookup for the IP we’re using maps to a .bc.googleusercontent.com domain, and this domain serves a cached, persistent, non-SSL, Google-indexed version of our site, none of which can be good.

Has anyone had any experience with either of these issues? Thanks!
Google Cloud Managed Service for Prometheus lets you monitor and alert on your workloads, using Prometheus, without having to manually manage and operate Prometheus at scale. It is Google Cloud's fully managed storage and query service for Prometheus metrics, built on top of Monarch, the same globally scalable data store as Cloud Monitoring. A thin fork of Prometheus replaces existing Prometheus deployments and sends data to the managed service with no user intervention. This data can then be queried using PromQL through the Prometheus Query API supported by the managed service, or using the existing Cloud Monitoring query mechanisms. You can explore more here
While using an internal load balancer, when a scale-out event happens (triggered by the traffic metric), a new instance is created. At this point I noticed that all existing instances get NO traffic at all from the load balancer for at least 10 seconds (traffic drops to zero). This causes severe traffic failures to the backend, and only after a few minutes, when the scale-out event completes, do the old instances get traffic back.
Hey there 😊 I'm here to share a blog post I authored. These tips can help you deploy Falco on GKE for sure. Falco is an open source HIDS tool backed by the CNCF. Falco, the cloud-native runtime security project, is the de facto Kubernetes threat detection engine, and it was the first runtime security project to join the CNCF as an incubation-level project. Falco acts as a security camera, detecting unexpected behavior, intrusions, and data theft in real time. Amazing, huh? Make your workloads safer right now!

👉 See how on Begin with Falco deployment on GKE (seiji.com.br)
Today at Google I/O, we’re thrilled to announce the preview of AlloyDB for PostgreSQL, a fully managed, PostgreSQL-compatible database service that provides a powerful option for modernizing your most demanding enterprise database workloads. Compared with standard PostgreSQL, in our performance tests, AlloyDB was more than four times faster for transactional workloads, and up to 100 times faster for analytical queries. AlloyDB was also two times faster for transactional workloads than Amazon’s comparable service. This makes AlloyDB a powerful new modernization option for transitioning off of legacy databases. Read more here.
Data loss and security breaches are becoming increasingly common events in today’s world. It is not a matter of if, but when, a disaster of some kind will happen. All of an organization’s information must be protected and readily available at all times for a business to survive. Given this, the importance of backups cannot be overstated. However, while backing up vital data is an integral part of any business’s IT strategy, having backups, whether in the cloud or on-prem, is not the same as having a disaster recovery plan. Differentiating backup from disaster recovery can help you develop effective strategies for avoiding the consequences of downtime and business disruptions. Understanding the basics of backup and disaster recovery is critical for minimizing the impact of unplanned downtime on your business. Across all industries, organizations recognize that downtime can quickly result in lost sales and revenue, interrupted service, possible supply ch
Anthos Clusters on Azure and Anthos Clusters on AWS now support Kubernetes versions 1.22.8-gke.200 and 1.21.11-gke.100
With Anthos Multi-Cloud, you can create Kubernetes clusters in both AWS and Azure cloud environments. Anthos clusters integrate with cloud-provider-specific resources such as load balancers and persistent storage.

With a single-cloud platform design, you are forced into using highly integrated cloud-specific tools, which creates one silo for each cloud environment. With Anthos clusters, you can deploy workloads to multiple clouds with a unified management, signaling, and configuration control plane.

You can manage your Kubernetes clusters on AWS and Azure from the Google Cloud console. With Connect gateway, you can connect to your clusters in any cloud using your Google Cloud identity for authentication.

For more information about other Anthos clusters environments, see Anthos clusters. Let me know what you think about this latest announcement from Google in the comments section.
Cloud networking refers to the ability to connect two resources together inside a cloud, across clouds, and with on-premises data centers. A cloud provider needs to provide three main types of connectivity:
- Site-to-cloud: between on-premises equipment and cloud resources
- Site-to-site: to connect on-premises resources together
- VPC-to-VPC: connectivity between cloud resources

Let’s take a look at each one! 👉 Read on at the source.
Every video shared, every email sent, and every app downloaded depends on data traffic that moves through international network infrastructure. How is this content magically available to people within milliseconds? It’s thanks to a rich ecosystem of companies and local providers who build global infrastructure that provides businesses and people around the world with the best possible experience for browsing, video conferencing, streaming, and much more. At Google, this work ranges from building and operating highly secure data centers and network “highways” traversing the globe, to maintaining the Google Global Cache that stores popular content near its users. Read more at https://cloud.google.com/blog/products/infrastructure/google-network-infrastructure-investments
Hey all, awesome to be a part of this community! Is anyone here running most of their backend on Cloud Functions, and do you have any tips on CI/CD and organization for large-scale Cloud Functions backends? Right now we have a fairly robust testing and staging flow, but we’re still manually deploying Cloud Functions because there doesn’t seem to be an intuitive way to automatically deploy only the functions and code that have changed. Has anyone else run into this problem, and do you have any tips or ideas on how to organize projects like this and deploy them at scale? Thanks!