Create a reliable and scalable cloud infrastructure that can adapt to a growing and evolving business.
🚀 Run your Arm workloads on Google Kubernetes Engine with Tau T2A VMs
Earlier today, we announced Google Cloud’s virtual machines (VMs) based on the Arm architecture on Compute Engine. Called Tau T2A, these VMs are the newest addition to the Tau VM family, which offers VMs optimized for cost-effective performance on scale-out workloads. We are also thrilled to announce that you can run your containerized workloads on the Arm architecture using GKE. Arm nodes come packed with the key GKE features you love on the x86 architecture, including the ability to run in GKE Autopilot mode for a hands-off experience, or on GKE Standard clusters where you manage your own node pools. See the ‘Key GKE features’ below for more details. 👉 Read further https://cloud.google.com/blog/products/containers-kubernetes/gke-supports-new-arm-based-tau-t2a-vms
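As a quick sketch of what this looks like in practice, adding an Arm node pool to an existing GKE Standard cluster is a single gcloud call. The cluster name, region, pool name, and machine type below are placeholders; substitute your own:

```shell
# Add an Arm (Tau T2A) node pool to an existing GKE Standard cluster.
# "arm-pool", "my-cluster", the region, and the machine type are placeholders.
gcloud container node-pools create arm-pool \
    --cluster=my-cluster \
    --region=us-central1 \
    --machine-type=t2a-standard-2 \
    --num-nodes=1
```

Note that Pods scheduled onto these nodes must be built for arm64, so multi-arch container images are the easiest path.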
Dear Team, I’ve installed the Managed Service for Prometheus setup and I’m trying to set up Prometheus and Grafana, but when I execute the port-forwarding command to port 3000, I get a “port is already in use” error because the Cloud Shell service is running on that port. How do I move the Cloud Shell service to a different port?
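Rather than moving Cloud Shell, it is usually simpler to forward Grafana to a different local port; the local port (the left-hand number) can be any free port, while the remote port stays 3000. A sketch, assuming a Grafana service named `grafana` in the `default` namespace (substitute your own names):

```shell
# Forward free local port 3001 to Grafana's port 3000 inside the cluster.
# "service/grafana" and the namespace are assumptions -- adjust to your setup.
kubectl port-forward service/grafana 3001:3000 --namespace default
# Then open the Cloud Shell web preview on port 3001 instead of 3000.
```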
🎉 New Cloud Armor features including rate limiting, adaptive protection, and bot defense
General Availability of new capabilities in Cloud Armor that can greatly improve the security, reliability, and availability of deployments, including:
- Per-client rate limiting with two new rule actions: “throttle” and “rate_based_ban”
- Bot management with reCAPTCHA Enterprise
- Machine learning-based Adaptive Protection to help counter advanced Layer 7 attacks
Also, we are announcing the availability of new Cloud Armor features in Preview, including:
- Updated preconfigured WAF rules based on CRS 3.3
- Network-based threat intelligence to help block known bad traffic
These new capabilities help provide enterprise-ready DDoS protection and web application firewall (WAF) solutions at planet-scale for our customers’ workloads, be they located on-premises, in colocation, or in any public cloud. 👉 Read further https://cloud.google.com/blog/products/identity-security/announcing-new-cloud-armor-rate-li
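For a feel of the new “throttle” action, a per-client-IP rate-limit rule can be added to an existing security policy with gcloud. A sketch, assuming a policy named `my-policy` and illustrative thresholds:

```shell
# Throttle each client IP to 100 requests per minute on policy "my-policy".
# Policy name, priority, and thresholds are placeholders for illustration.
gcloud compute security-policies rules create 1000 \
    --security-policy=my-policy \
    --expression="true" \
    --action=throttle \
    --rate-limit-threshold-count=100 \
    --rate-limit-threshold-interval-sec=60 \
    --conform-action=allow \
    --exceed-action=deny-429 \
    --enforce-on-key=IP
```

Requests over the threshold receive an HTTP 429 while conforming traffic passes through unchanged.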
Hi there! We've come across a challenge using an internal HTTP load balancer (L7) and could use some advice. We're working on a project in Canada that requires GPUs (which GCP does not offer in Canada). Our Cloud Run project needs to stay in Canada due to speed/latency and data laws; however, this forces us to use VMs (GCE) in the US to gain access to GPUs. We want to load balance internally (HTTP) on our VPC, with Cloud Run as the client in the Canada region and GCE+GPUs as the backend in a US region, but this doesn't seem possible. It appears the only way to load balance across regions is to use a TCP load balancer, which doesn't work as well for us (it doesn't allow scaling on metrics like the number of requests or requests per second). We've considered setting up an Nginx proxy and other types of proxies that would allow us to cross regions, but it would be much easier to use a GCP-native solution that autoscales. Any suggestions? Thanks!
It’s now GA! General availability of the PostgreSQL interface for Cloud Spanner. The PostgreSQL interface is available at no additional cost in all regional and multi-regional Spanner configurations. The PostgreSQL interface is a new way to access Spanner. It combines the familiarity and portability of PostgreSQL with the unmatched scalability and fully managed experience of Spanner. DevOps teams that have scaled their databases with brittle sharding or complex replication can now simplify their architecture with Spanner, using the tools and skills they already have. Because it’s PostgreSQL, you can be sure that the schemas and queries you write in Spanner are easily portable. And because it’s Spanner, you can trust that it will grow with your business and development team. Try it out today using a new granular Spanner instance, starting at $65 USD/month, or as low as $40 USD/month with a three-year commitment. PostgreSQL DDL and DML from psql. 👉 Read further on https://cloud.google.c
Setting Up Your GCP Foundations Through Terraform — Chapter 2 — Access to GCP, Setting up Billing & Preparing Github
A bit later than promised, but here it is: Setting Up Your GCP Foundations Through Terraform — Chapter 2 — Access to GCP, Setting Up Billing & Preparing Github Coming next week: Deploying your bootstrap project and laying the CI/CD foundations.
Over the last month, I have been spending a lot of time #learning and getting hands-on with #terraform. So I decided to start writing a weekly blog post on how to build a secure #gcp foundation for your application through Terraform. Chapter 1 is live now on #medium - https://medium.com/@goodmanjoel2017/setting-up-your-gcp-foundations-through-terraform-chapter-1-introduction-first-steps-33bd11e949e5
I’m proud and excited to announce Managecore was named Google Partner of the Year for SAP Specialization. “Based on their certified, repeatable customer success and strong technical capabilities, we’re proud to recognize Managecore as Specialization Partner of the Year - SAP on Google Cloud,” said Nina Harding, Global Chief, Partner Programs and Strategy, Google Cloud Specialization. Read the full Managecore announcement >>> https://managecore.com/news/managecore-wins-googlecloud-specialization-partner-of-the-year-sap-on-google-cloud-award/ View Google Cloud’s Partner Announcement >>> https://cloud.google.com/blog/topics/partners/google-cloud-announces-2021-partner-of-the-year-awards
I just completed a Qwiklabs demo on Firebase. As a developer, Firebase allows you to create Android, iOS, and web applications easily and quickly, much like WordPress. In this demo, I configured and deployed a chat client application using Firebase. I implemented the following tasks:
- Sync data using Cloud Firestore and Cloud Storage for Firebase.
- Authenticate users using Firebase Auth.
- Deploy the web app on Firebase Hosting.
- Send notifications with Firebase Cloud Messaging.
You can try it out on Qwiklabs: https://www.qwiklabs.com/focuses/660?parent=catalog #developer #firebase #qwiklabs
In case you haven’t already registered for this event, it promises to be an insightful discussion of Google-native solutions for integrating SAP data with Google Cloud services and solutions. The talk will outline current solutions and those on the product roadmap, and it will showcase specific topics such as real-time data replication from SAP to BQ, how to build complex ETL pipelines with a multitude of SAP connectors in Cloud Data Fusion, and more! If you are unable to attend and would like to submit a question, please do so in the comments and we’ll make sure to share it at the event!
Network Analyzer offers an out-of-the-box suite of always-on analyzers that continuously monitor GCE and GKE network configuration. These analyzers run in the background, monitoring network services like load balancers, hybrid connectivity, and connectivity to Google services like Cloud SQL. As users continually push out config changes, or as the metrics for their deployments change, the relevant analyzers automatically surface failure conditions or suboptimal configurations.
Surfacing insights through Network Analyzer
Network Analyzer prioritizes and proactively surfaces insights to users at a project level or across multiple projects. It identifies the root cause of each surfaced insight and provides a link to documentation with recommendations for resolving it. 👉 Read further: Introducing Network Analyzer: detect service and network issues | Google Cloud Blog
PostgreSQL uses transaction IDs (also called TXIDs or XIDs) to implement Multi-Version Concurrency Control (MVCC) semantics. To prevent transaction ID wraparound, PostgreSQL uses a vacuum mechanism, which operates as a background task called autovacuum (enabled by default), or can be run manually using the VACUUM command. A vacuum operation freezes committed transaction IDs and releases them for further use. You can think of this mechanism as “recycling” transaction IDs: it keeps the database operating despite using a finite number to store the transaction ID. Vacuum can sometimes be blocked due to workload patterns, or it can become too slow to keep up with database activity. If transaction ID utilization continues to grow despite the freezing performed by autovacuum or manual vacuum, the database will eventually refuse to accept new commands to protect itself against TXID wraparound. To help you monitor your database and ensure that this doesn’t happen, Cloud SQL for PostgreSQL
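You can also watch transaction ID consumption yourself by querying the standard `pg_database` catalog: `age(datfrozenxid)` grows toward the roughly two-billion wraparound limit whenever freezing falls behind. A sketch, assuming `psql` is already configured to reach your instance:

```shell
# Show per-database transaction ID age; values climbing toward ~2 billion
# mean vacuum freezing is not keeping up and wraparound protection looms.
psql -c "SELECT datname, age(datfrozenxid) AS xid_age
         FROM pg_database
         ORDER BY xid_age DESC;"
```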
Run production workloads for as low as $40/month 😀 Cloud Spanner is a relational database service that offers industry-leading 99.999% availability and near-unlimited scale to handle even the most demanding of workloads. With granular instance sizing, you can still get all of the Spanner benefits at a much lower cost: transparent replication across zones and regions, high availability, resilience to different types of failures, and the ability to scale up and down as needed without any downtime. And with Committed Use Discounts, the entry price for production workloads drops to less than $40/month, as you receive a 40% discount for a 3-year commitment. Read further https://cloud.google.com/blog/products/databases/use-spanner-at-low-cost-with-granular-instance-sizing
Well, it was in Preview... but it finally went GA 😎 You can now begin deploying Spot VMs in your Google Cloud projects to start saving. For an overview of Spot VMs, see our Preview launch blog, and for a deeper dive, check out our Spot VM documentation. Modern applications such as microservices, containerized workloads, and horizontally scalable applications are engineered to persist even when the underlying machine does not. This architecture allows you to leverage Spot VMs to access capacity and run applications at a low price. You will save 60-91% off the price of our on-demand VMs with Spot VMs. To make it even easier to utilize Spot VMs, we’ve incorporated Spot VM support into a variety of tools. GKE, for instance 🚀 Read further https://cloud.google.com/blog/products/compute/google-cloud-spot-vms-now-ga
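To see the GA surface in action, creating a Spot VM comes down to one provisioning-model flag. A sketch; the instance name, zone, and machine type below are placeholders:

```shell
# Create a Spot VM that is stopped (not deleted) when Compute Engine
# preempts it. Name, zone, and machine type are placeholders.
gcloud compute instances create demo-spot-vm \
    --zone=us-central1-a \
    --machine-type=e2-medium \
    --provisioning-model=SPOT \
    --instance-termination-action=STOP
```

Since a Spot VM can be preempted at any time, workloads on it should tolerate restarts, exactly the scale-out pattern the post describes.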
Managecore and CNT Management Consulting have teamed up to create a unique fusion of a functional and technical operational assessment for SAP customers to understand their success path to SAP S/4HANA. After the 4-week, 5-phase assessment is concluded, your organization will have the most complete picture not only of the conversion roadmap (with project scope, timeline, and cost estimates), but also of how your enterprise will run on the new platform. Want to learn more, or ready to get started on your S/4HANA conversion path? Visit » https://info.managecore.com/sap-s4hana-conversion-assement-managecore-cnt
We’re facing two odd (and related) problems: there doesn’t seem to be a way to establish a PTR record for our primary domain’s IP address, as it’s assigned to a GCP load balancer; and a reverse lookup for the IP we’re using maps to a .bc.googleusercontent.com domain, which serves a cached, persistent, non-SSL, Google-indexed version of our site, none of which can be good. Has anyone had any experience with either of these issues? Thanks!
Google Cloud Managed Service for Prometheus lets you monitor and alert on your workloads, using Prometheus, without having to manually manage and operate Prometheus at scale. Managed Service for Prometheus is Google Cloud's fully managed storage and query service for Prometheus metrics. This service is built on top of Monarch, the same globally scalable data store used by Cloud Monitoring. A thin fork of Prometheus replaces existing Prometheus deployments and sends data to the managed service with no user intervention. This data can then be queried using PromQL, through the Prometheus Query API supported by the managed service, or via the existing Cloud Monitoring query mechanisms. You can explore more here.
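With managed collection enabled on a cluster, scraping is configured through a `PodMonitoring` resource rather than a prometheus.yml. A sketch of applying one; the resource name, label selector, port name, and interval are illustrative assumptions:

```shell
# Tell the managed collectors to scrape pods labeled app=demo-app on their
# "metrics" port every 30s. All names here are placeholders for illustration.
kubectl apply -f - <<EOF
apiVersion: monitoring.googleapis.com/v1
kind: PodMonitoring
metadata:
  name: demo-app-monitoring
spec:
  selector:
    matchLabels:
      app: demo-app
  endpoints:
  - port: metrics
    interval: 30s
EOF
```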
Hey there 😊 I’m here to share a blog post I authored. These tips can definitely help you deploy Falco on GKE. Falco is an open-source HIDS tool backed by the CNCF. Falco, the cloud-native runtime security project, is the de facto Kubernetes threat detection engine, and the first runtime security project to join the CNCF as an incubation-level project. Falco acts as a security camera, detecting unexpected behavior, intrusions, and data theft in real time. Amazing, huh? Make your workloads safer right now! 👉 See how at Begin with Falco deployment on GKE (seiji.com.br)
Today at Google I/O, we’re thrilled to announce the preview of AlloyDB for PostgreSQL, a fully managed, PostgreSQL-compatible database service that provides a powerful option for modernizing your most demanding enterprise database workloads. Compared with standard PostgreSQL, in our performance tests, AlloyDB was more than four times faster for transactional workloads, and up to 100 times faster for analytical queries. AlloyDB was also two times faster for transactional workloads than Amazon’s comparable service. This makes AlloyDB a powerful new modernization option for transitioning off of legacy databases. Read more here.
Data loss and security breaches are becoming increasingly common events in today’s world. It is not a matter of if, but when a disaster of some kind will happen. All of an organization’s information must be protected and readily available at all times in order for a business to survive. Considering this fact, the importance of backups cannot be overstated. However, while backing up vital data is an integral part of any business’s IT strategy, having backups — whether cloud backups or on-prem — is not the same as having a disaster recovery plan. Differentiating backup from disaster recovery can help you develop effective strategies for avoiding the consequences of downtime and business disruptions. Understanding the basics of backup and disaster recovery is critical for minimizing the impact of unplanned downtime on your business. Across all industries, organizations recognize that downtime can quickly result in lost sales and revenue, interrupted service, possible supply ch
Anthos Clusters on Azure and Anthos Clusters on AWS now support Kubernetes versions 1.22.8-gke.200 and 1.21.11-gke.100
With Anthos Multi-Cloud, you can create Kubernetes clusters in both AWS and Azure cloud environments. Anthos clusters integrate with cloud-provider-specific resources such as load balancers and persistent storage. With a single-cloud platform design, you are forced into using highly integrated cloud-specific tools, creating one silo for each cloud environment. With Anthos clusters, you can deploy workloads to multiple clouds with a unified management, signaling, and configuration control plane. You can manage your Kubernetes clusters on AWS and Azure from the Google Cloud console. With Connect gateway, you can connect to your clusters in any cloud using your Google Cloud identity for authentication. For more information about other Anthos clusters environments, see Anthos clusters. Let me know what you think about this latest announcement from Google in the comments section.
Cloud networking refers to the ability to connect two resources together inside a cloud, across clouds, and with on-premises data centers. A cloud provider needs to provide three main types of connectivity:
- Site-to-cloud - between on-premises equipment and cloud resources
- Site-to-site - to connect on-premises resources together
- VPC-to-VPC - connectivity between cloud resources
Let’s take a look at each one! 👉 Read more at the source.