Learn | C2C Community

What Is Multi-CDN Architecture and What Are the Benefits of This Distribution Strategy?

Content delivery networks (CDNs) generally come in two configurations: push or pull. In a push CDN, the origin server proactively pushes content out to the CDN's edge servers ahead of demand. In a pull (origin-pull) CDN, the CDN fetches content from the origin the first time a user requests it and caches it for subsequent requests. Both architectures come with advantages and disadvantages, and enterprises should choose the right CDN for their business needs accordingly.

Where CDNs differ greatly is in their setup. Businesses can either centralize their CDN, designating one location as the content origin, or they can create a multi-CDN architecture, distributing content delivery across multiple CDNs. There are benefits and drawbacks to either configuration, and today we're examining content distribution within a multi-CDN strategy.

What Is Multi-CDN?

To fully understand a multi-CDN architecture, it's important to first understand what a CDN is. CDN stands for "content delivery network": a series of connected servers that deliver content across a network. These servers can be spread across different geographical locations or sit in a central location.

A multi-CDN architecture is a type of server infrastructure that distributes content across multiple CDNs and edge servers located in different geographies. This geographical distribution is significant because it is the key contributor to one of the biggest advantages of the strategy: superior speed. As with anything, though, a multi-CDN network's power hinges on the effectiveness of its distribution and setup.

How Is a Multi-CDN Architecture Implemented?

Implementing a content delivery environment at any scale requires the proper setup; for multi-CDNs in particular, the deployment method is critical to the effectiveness of the strategy.

Static CDN Method

A static CDN method stores files that do not change often.
For example, most sites have numerous JavaScript and CSS files that only change when developers deploy updates to the site. These static files can be cached at the CDN for fast delivery to the reader's browser.

Ratio Load-Balancing

A ratio load-balancing method for multi-CDN integration, also known as weighted round-robin load-balancing, distributes traffic according to administrator-configured weights. The administrator assigns each server a weight such as 1, 3, or 5, and the CDN load balancer sends each server a share of connections proportional to its weight.

Performance Load-Balancing

Another method for integrating a multi-CDN strategy is performance load-balancing, which sends traffic to CDN servers based on their current performance statistics. For example, if one server's CPU utilization is high, the load balancer routes traffic to another server with lower resource utilization.

Geolocation Load-Balancing

One main advantage of a CDN is its numerous geolocated data centers. As the name suggests, geolocation load-balancing chooses which server will handle a request based on the user's geolocation data. Transferring data over shorter distances improves perceived performance and the user experience.

Benefits of a Multi-CDN Strategy

As demand for video and digital media streaming continues to grow globally, many content distributors will be forced to consider ways to optimize server infrastructure for speed. Increased speed is one of the many benefits of a multi-CDN architecture, but it isn't the only one: multi-CDNs also provide superior performance, capacity, and security in content distribution.

Benefit #1

The first benefit of a multi-CDN strategy is performance. Performance is shown to affect the user experience, bounce rate, and customer retention.
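The three load-balancing methods above can be sketched in a few lines of Python. This is a minimal illustration, not a real load balancer: the CDN names, weights, CPU figures, and coordinates are all hypothetical placeholders.

```python
import math
import random

# Hypothetical CDN endpoints; weights, CPU utilization, and coordinates
# are illustrative values only.
CDNS = {
    "cdn-a": {"weight": 5, "cpu_util": 0.80, "lat_lon": (40.7, -74.0)},   # New York
    "cdn-b": {"weight": 3, "cpu_util": 0.35, "lat_lon": (51.5, -0.1)},    # London
    "cdn-c": {"weight": 1, "cpu_util": 0.10, "lat_lon": (35.7, 139.7)},   # Tokyo
}

def ratio_pick(cdns):
    """Ratio (weighted round-robin): split traffic in proportion to weights."""
    # Build a pool with each CDN repeated `weight` times; a random draw
    # then approximates the weighted rotation over many requests.
    pool = [name for name, cdn in cdns.items() for _ in range(cdn["weight"])]
    return random.choice(pool)

def performance_pick(cdns):
    """Performance load-balancing: route to the least-utilized CDN."""
    return min(cdns, key=lambda name: cdns[name]["cpu_util"])

def geo_pick(cdns, user_lat_lon):
    """Geolocation load-balancing: route to the geographically closest CDN."""
    def distance(name):
        lat, lon = cdns[name]["lat_lon"]
        # Planar approximation is enough for an illustration.
        return math.hypot(lat - user_lat_lon[0], lon - user_lat_lon[1])
    return min(cdns, key=distance)

print(performance_pick(CDNS))        # cdn-c (lowest CPU utilization)
print(geo_pick(CDNS, (48.8, 2.3)))   # cdn-b (a Paris user is routed to London)
```

In practice these decisions are made by DNS-based traffic steering or a managed load balancer rather than application code, but the selection logic is the same.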
For any organization on a traditional hosting platform, adding a multi-CDN to its infrastructure will immediately improve performance. CDNs cache content on fast servers, so users get high-performance delivery of their requests.

Benefit #2

The second benefit is the elimination of a single point of failure. Should one CDN fail in production, a secondary failover CDN can take over until the original is brought back into service. Eliminating a single point of failure reduces downtime and, with a well-designed infrastructure plan, can even keep production uptime at 100%.

Benefit #3

For organizations with global customers, a multi-CDN architecture allows you to host servers in data centers across the globe, offering faster performance to customers in locations far from the business's local host servers. Bringing data centers closer to the target customer reduces the distance the data must travel, which speeds up application performance.

Drawbacks of Multi-CDNs

While multi-CDNs have many benefits in today's world of digital streaming, this content distribution strategy comes with its own set of drawbacks. The cost is much higher than working with a single CDN: the organization must budget for managing multiple CDNs for one application.

The other disadvantage is technical overhead. Administrators must be able to configure and manage the additional architecture. If they don't have the skill set to configure multiple CDNs, they must take the time to learn and configure the settings.

Integrating with Google Cloud

According to Google, "Cloud CDN is tightly integrated with Cloud Monitoring and Cloud Logging." They provide "detailed latency metrics out of the box, as well as raw HTTP request logs for deeper visibility.
Logs can be exported into Cloud Storage and/or BigQuery for further analysis with just a few clicks."

Google adds: "As part of Google Cloud, Cloud CDN caches your content in 96 locations around the world and hands it off to 134 network edge locations, placing your content close to your users, usually within one network hop through their ISP."

Read more at the Google blog, linked below.

Is a Multi-CDN Strategy Right for You?

Every organization must weigh the pros and cons of a multi-CDN architecture. The strategy can improve performance and reduce downtime, so an organization dependent on application uptime may find the pros outweigh the cons. As with any infrastructure change, though, these benefits should be weighed against the additional cost of deploying multiple CDNs.

Extra Credit

Google CDN Blog
Types of CDNs
What is a CDN
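To make the failover benefit discussed above concrete, here is a minimal sketch of trying a primary CDN and falling back to a secondary. The endpoint names are hypothetical, and the fetch function is a stand-in for a real HTTP request (the primary is hard-coded to fail so the failover path runs).

```python
# CDNs listed in priority order; hypothetical endpoint names.
CDN_PRIORITY = ["https://cdn-primary.example.com", "https://cdn-backup.example.com"]

def fetch_from(endpoint, path):
    # Placeholder for a real HTTP request. For this illustration the
    # primary CDN is simulated as unreachable.
    if "primary" in endpoint:
        raise ConnectionError("CDN unreachable")
    return f"content of {path} served by {endpoint}"

def fetch_with_failover(path, endpoints=CDN_PRIORITY):
    """Try each CDN in priority order until one succeeds."""
    last_err = None
    for endpoint in endpoints:
        try:
            return fetch_from(endpoint, path)
        except ConnectionError as err:
            last_err = err  # try the next CDN in the list
    raise RuntimeError("all CDNs failed") from last_err

print(fetch_with_failover("/app.js"))  # the backup CDN serves the request
```

Production setups usually implement this at the DNS or traffic-steering layer with health checks rather than in application code, but the priority-ordered retry is the same idea.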

Categories: Infrastructure, Networking

Getting to Know Google Cloud's Urs Hölzle

This article was originally published on September 30, 2020.

In 1999, Urs Hölzle joined Google as one of its first 10 employees and the first vice president of engineering. Twenty-one years later, he serves as the senior vice president for technical infrastructure and oversees the design, installation, and operation of the servers, networks, and data centers that power Google's services. In sum, he is the person in charge of making all of Google's wares available to developers around the world via Google Cloud.

Hölzle is one of the few people so intimately familiar with Google's infrastructure and how it has evolved through the years to become one of the world's largest computing systems.

While his roots are in Switzerland, Hölzle received his Ph.D. from Stanford, where he invented the fundamental techniques used in most of today's leading Java compilers. He established himself as a professor of computer science at the University of California, Santa Barbara, before joining Google and beginning his most notable work: downloading and indexing the entire world wide web and serving it up as the ubiquitous search engine we know today.

Urs Hölzle: Looking Inward to Plan Forward

When Hölzle began his tenure, he and his team were tasked with engineering the computing infrastructure for Google's search engine on what he's referred to as "not very much money." Of that experience, he has said: "That was 18 years of hard work." But it's that hard work that led to further innovation.

What started as a focus on an individual server has led to one of the largest networks of servers, as well as very efficient data centers. In fact, Hölzle and his team have reduced the energy used by Google data centers to less than 50% of the industry average.

Hölzle has commented that he sees Google as any other large company. "We have IT systems. We have security problems. We have compliance problems. We have HR systems….
We look at other companies really as companies that struggle with the same problems that we have struggled with." It makes sense, then, that because Google has had the same types of problems, it is in a better position to help solve them. "All of these things are things that actually any 50,000-person company has, and a subset of those are things that a 1,000-person company has."

In his current role, Hölzle works to ensure that all those who rely on Google's servers, networks, and data centers have access, and that Google's infrastructure can hold it all up.

The Tech Guru Aims for New Heights

Much of Hölzle's attention these days is on making Google's technical infrastructure available to developers around the world through Google Cloud. In 2018, he wrote about the important goal for Internet companies of offering services that can be accessed by hundreds of millions of users, no matter where they are. "Through the years, we've worked hard to continually improve how we serve users in all corners of the world," he wrote at the time. "From an infrastructure perspective, this has meant focusing on how best to route data securely, balance processing loads and storage needs, and prevent data loss, corruption, and outages."

Earlier this year, he wrote another article focused on keeping the Google network infrastructure strong amid COVID-19. He noted, "This may be a time of global uncertainty, but we're working hard to ensure the Google network is there for everyone, business or consumer, day and night."

Hölzle has made no secret of his views on the future of Google Cloud: "For the cloud to take over the world, it needs to make everyone successful." It's not just a product but rather an ecosystem where open source creates the standard, he contends.
"With open source, you have a way to have a standard because everyone uses the same piece of code… but at the same time, you can evolve and move the ecosystem forward."

The Upcoming Revolution in Cloud

In his recent keynote address at Google Next 2020, Hölzle discussed what's next in enterprise IT. He noted two key things: first, that enterprise innovation can catch up to the rate of consumer innovation, but only if the enterprise adopts an always up-to-date software stack that works across any cloud, as well as on-premises and at the edge; and second, that Google Anthos is exactly that. "Anthos is as safe and as clear a choice as back when choosing Linux, because it runs everywhere," he said during his keynote. "It's based on open source and communities, and everything will run on top of that." He added, "The beauty of Anthos is that it doesn't try to do too much. It standardizes the things that should not be different."

When asked what is missing from the cloud today, Hölzle suggested, "a lot." He believes the cloud is still in its infancy, and there is much more work to do. For example, he compared the cloud we know today to the first smartphone made available to the masses. "It's like trying to imagine a phone right now without the app store. I predict in 2025, we'll really be embarrassed about the cloud of 2020."

"The game changer will be to move away from the idea of having three different clouds we need to pick from," he added. "Anthos combined with Kubernetes gives you open standards and flexibility. From there, you can mix and match and adopt the things that work best for you. That is really what the next big revolution in the cloud will be."

Categories: Infrastructure, Google Cloud Strategy, Careers in Cloud, Hybrid and Multicloud, Networking, Interview