C2C Monthly Recap: June 2022
- C2C News
On Saturday, March 12, 2022, C2C hosted a fireside chat featuring Todd Walters, Enterprise Architect at Eli Lilly, in conversation with Google Customer Engineer Cori Peele. This live, interactive session was jointly organized by C2C Global and BDPA for decision-makers weighing considerations and pursuing use cases specific to the healthcare and life sciences industry. Over the course of more than an hour, Peele and Walters discussed this topic in significant depth, both in the context of Walters’ personal career and cloud journey and in the context of the healthcare and life sciences industry at large. Topics covered in this fireside chat include:
- (8:45) Todd Walters’ background and current role
- (14:00) Todd Walters’ cloud journey
- (18:00) Changes in networking and core compute infrastructure over time
- (34:30) Modern application development and CI/CD
- (41:10) Architectural perspectives for hybrid cloud and multi-cloud
- (49:00) Environmental costs of cloud computing
- (56:00) Example of a cloud solution addressing a business problem (e.g. Translate - public story)

Watch the full recording of this conversation below:
“Cloud repatriation,” like “cloud migration” and “cloud native,” is a tech term borrowed from the language of social science: all of these terms describe a relationship to a place of origin. What each really describes, though, is where someone, or something, lives. In social science, that someone is a person, someone born a citizen of one country or returned there after displacement by conflict or other political circumstances. In tech, the something born in or returned to its place of origin is an asset or a resource an organization controls: it’s your organization’s data, its software, or whatever else you need to store to be able to run it. After years of cloud migration dominating the conversation about software and data hosting and storage, the term “cloud repatriation” is emerging as a new hypothetical for migrated and cloud-native organizations. So many organizations are now hosted on the cloud that a greater number than ever have the option, feasible or not, to move off. Whether any cloud-native or recently migrated organization would actually want to move its resources back on-premises, to a data center, is another question. To discuss this question and its implications for the future of the cloud as a business solution, C2C recently convened a panel of representatives from three major cloud-hosted companies: Nick Tornow of Twitter, Keyur Govande of Etsy, and Rich Hoyer and Miles Ward of SADA. The conversation was charged from the beginning, and only grew more lively throughout. Sensing the energy around this issue, Ward, who hosted the event, started things off with some grounding exercises. First, he asked each guest to define a relevant term.
Tornow defined repatriation as “returning to your own data centers...or moving away from the public cloud more generally,” Govande defined TCO as “the purchase price of an asset and the cost of operating it,” and Hoyer defined OPEX and CAPEX as, respectively, real-time day-to-day expenses and up-front long-term expenses. Ward then stirred things up by asking the guests to pose some reasons why an organization might want to repatriate. After these level-setting exercises, the guests dove into the business implications of repatriation. The question of cost came up almost immediately, redirecting the discussion to the relationship between decisions around workloads and overall business goals: Govande’s comments about “problems that are critical to your business” particularly resonated with the others on the call. Govande briefly elaborated on these comments via email after the event. “In the context of repatriation, especially for a product company, it is very important to think through the ramifications of doing the heavy infrastructural lift yourself,” he said. “In my opinion, for most product companies, the answer would be to ‘keep moving up the stack,’ i.e. to be laser focused on your own customers' needs and demands, by leveraging the public cloud infrastructure.” These sentiments resurfaced later in the discussion, when the group took up the problem of weighing costs against potential opportunities for growth: The more the group explored these emerging themes of workload, cost, and scale, the more the guests offered insights based on their firsthand experiences as executives at major tech companies.
Tornow used an anecdote about launching the game Farmville at Zynga to illustrate the unique challenges of launching products on the cloud: During the audience Q&A, a question about TCO analysis gave Hoyer the chance to go long on his relevant experiences at SADA: As the conversation began to wind down, Ward put the guests on the spot again, asking Tornow and Govande point-blank whether either of them would consider repatriation an option for their company that very day. Unsurprisingly, neither said they would: By the time Ward handed the microphone back to Dale Rossi of Google Cloud, who introduced and concluded the event, the conversation had lasted well over an hour, leaving very few angles on the subject of repatriation unexamined. Many hosts might have felt satisfied letting an event come to an end at this point, but not Ward. To leave the guests, and the audience, with a sense of urgency and resolve, he treated everyone on the call to a rendition of “Reveille,” the traditional military call to arms, arranged exclusively for this group for solo tuba: Repatriation may not be a realistic option for many if not most businesses, but discussing the possibility hypothetically illuminates the considerations these same businesses will have to confront as they approach cloud strategy and workload balance. “Nobody on our panel had heard of anyone born in the cloud ever going ‘back’ to the data center,” Ward said in an email reflecting on the event. “Any infrastructure cost analysis is a ‘complex calculus,’ and there's no easy button.” For Ward, there is one way to make this complex calculus manageable: “To get maximum value from cloud, focus in on the differentiated managed services that allow you to refocus staff time on innovation.” When you hear the word “repatriation,” what comes to mind for you? What does it imply for your organization and the workloads your organization manages?
Are there any crucial considerations you want to talk through in more depth? Join the C2C Community and start the conversation! Extra Credit:
The full recording from this C2C Deep Dive includes panel discussion on:
- Defining terms for repatriation, total cost of ownership (TCO), operational expenditures (OPEX), and capital expenditures (CAPEX)
- Understanding the motivations, payoffs, and pitfalls of repatriating workloads off of the cloud
- Workload considerations from applied knowledge at Twitter and Etsy

Who spoke at this event?
- Miles Ward, CTO, SADA
- Rich Hoyer, Director of Customer FinOps, SADA
- Keyur Govande, VP Infrastructure and Chief Architect, Etsy
- Nick Tornow, Platform Lead, Twitter
Google Cloud provides virtual machines (VMs) to suit any workload, be it low-cost, memory-intensive, or data-intensive, and any operating system, including multiple flavors of Linux and Windows Server. You can even couple two or more of these VMs for fast and consistent performance. VMs are also cost-efficient: pricier VMs come with discounts of up to 30% for sustained use and discounts of over 60% for three-year commitments. Google’s VMs can be grouped into five categories.

Scale-out workloads (T2D)

If you’re managing or supporting a scale-out workload, for example web servers, containerized microservices, media transcoding, or large-scale Java applications, you’ll find Google’s T2D ideal for your purposes. It’s cheaper and more powerful than general-purpose VMs from leading cloud vendors. It also comes with full Intel x86 CPU compatibility, so you don’t need to port your applications to a new processor architecture. T2D VMs offer up to 60 vCPUs, 4 GB of memory per vCPU, and up to 32 Gbps networking. Couple T2D VMs with Google Kubernetes Engine (GKE) for optimized price performance for your containerized workloads.

General-purpose workloads (E2, N2, N2D, N1)

Looking for a VM for general computing scenarios such as databases, development and testing environments, web applications, and mobile gaming? Google’s E2, N2, N2D, and N1 machines offer a balance of price and performance, scaling up to 224 vCPUs and 896 GB of memory.
Differences between VM types: E2 VMs specialize in small to medium databases, microservices, virtual desktops, and development environments; E2s are also the cheapest general-purpose VMs. N2, N2D, and N1 VMs are better equipped for medium to large databases, media streaming, and cache computing.
Limitations: E2 VMs don’t come with discounts for sustained use, and they don’t support GPUs, local solid-state drives (SSDs), sole-tenant nodes, or nested virtualization.
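The sustained-use and committed-use discounts mentioned at the top of this article come down to simple arithmetic. Below is a minimal sketch in Python; the hourly list price is made up, and the ~30% and 60% rates are this article's approximations, not an official rate card:

```python
# Sketch of VM discount arithmetic. The hourly price is hypothetical; the
# ~30% sustained-use and 60% committed-use rates are the article's
# approximations, not Google Cloud's published pricing.

HOURS_PER_MONTH = 730  # average hours in a month

def monthly_cost(hourly_list_price, sustained_use_discount=0.30,
                 committed_discount=None):
    """Estimate a VM's monthly cost under one discount model.

    If a committed-use discount is given, it replaces the sustained-use
    discount (the two don't stack).
    """
    base = hourly_list_price * HOURS_PER_MONTH
    if committed_discount is not None:
        return base * (1 - committed_discount)
    return base * (1 - sustained_use_discount)

price = 0.10  # hypothetical $/hour list price
print(round(monthly_cost(price, sustained_use_discount=0.0), 2))  # 73.0
print(round(monthly_cost(price), 2))                              # 51.1
print(round(monthly_cost(price, committed_discount=0.60), 2))     # 29.2
```

Even this toy version shows why commitment decisions matter: over three years, the gap between on-demand and committed pricing compounds every month.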
Ultra-high memory VMs (M2, M1)

Memory-optimized VMs are ideal for memory-intensive workloads, offering more memory per core than other VM types, with up to 12 TB of memory. They suit applications with higher memory demands, such as in-memory data analytics workloads or large in-memory databases such as SAP HANA. Both models are also well suited to business warehousing (BW) workloads, genomics analysis, and SQL analysis services.
Differences between VM types: M1 works best with medium in-memory databases, such as Microsoft SQL Server. M2 works best with large in-memory databases.
Limitations: Memory-optimized VMs are only available in specific regions and zones on certain CPU processors. You can’t use regional persistent disks with memory-optimized machine types, and they don’t support graphics processing units (GPUs). M2 VMs don’t come with the 60-91% discount of Google’s preemptible VMs (PVMs). (PVMs last no longer than 24 hours, can be stopped abruptly, and may sometimes not be available at all.)

Compute-intensive workloads (C2)

When you’re into high-performance computing (HPC) and want maximum scale and speed, such as for gaming, ad serving, media transcoding, AI/ML workloads, or large-scale big data analysis, you’ll want Google Cloud’s flexible and scalable compute-optimized VMs (C2). C2 VMs offer up to 3.8 GHz sustained all-core turbo clock speed, the highest consistent per-core performance for real-time workloads.
Limitations: You can’t use regional persistent disks with C2s, and C2s have different disk limits than general-purpose and memory-optimized VMs. They’re only available in select zones and regions on specific CPU processors, and they don’t support GPUs.

Demanding applications and workloads (A2)

Accelerator-optimized (A2) VMs are designed for your most demanding workloads, such as machine learning and high-performance computing.
They’re the best option for workloads that require GPUs and are perfect for solving large problems in science, engineering, or business. A2 VMs range from 12 to 96 vCPUs, offer up to 1,360 GB of memory, and each comes with a fixed number of GPUs attached. You can add up to 257 TB of local storage for applications that need higher storage performance.
Limitations: You can’t use regional persistent disks with A2 VMs. A2s are only available in certain regions and zones, and only on the Cascade Lake platform.

So: which VM should I choose for my project?

Any of the above VMs could be the right choice for you. To determine which would best suit your needs, take the following considerations into account:
- Your workload: What are your CPU, memory, porting, and networking needs? Can you be flexible, or do you need a VM that fits your architecture? For example, if you use Intel AVX-512 and need to run on CPUs that have this capability, are you limited to VMs that fit this hardware, or can you be more flexible?
- Price/performance: Is your workload memory-intensive, and do you need high-performance computing for maximum scale and speed? Does your business or project deal with data-intensive workloads? In each of these cases, you’ll have to pay more. Otherwise, go for the cheaper general-purpose VMs.
- Deployment planning: What are your quota and capacity requirements? Where are you located? (Remember: some VMs are unavailable in certain regions.)

Do you work with VMs? Are you looking for a VM to support a current project? Which VMs do you use, or which would you consider? Drop us a line and let us know! Extra Credit:
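The selection criteria above can be condensed into a toy decision helper. This is only a sketch of this article's categories; the function name, thresholds, and labels are our own simplifications, not an official sizing tool:

```python
def suggest_machine_family(workload_type, needs_gpu=False, memory_gb=0):
    """Map coarse workload traits to the VM families described above.

    A deliberately simplified sketch: real sizing also depends on
    region availability, quotas, and price/performance testing.
    """
    if needs_gpu:
        return "A2"      # accelerator-optimized
    if memory_gb > 896:  # beyond the general-purpose ceiling cited above
        return "M1/M2"   # ultra-high memory
    if workload_type == "hpc":
        return "C2"      # compute-optimized
    if workload_type == "scale-out":
        return "T2D"     # scale-out workloads
    return "E2/N2"       # general purpose

print(suggest_machine_family("web"))                      # E2/N2
print(suggest_machine_family("scale-out"))                # T2D
print(suggest_machine_family("training", needs_gpu=True)) # A2
```

Treat the output as a starting point for the deployment-planning questions above, not as a final answer.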
When organizations need to pivot to a different process or adopt different tools to enable more productivity, they often leap into that new system without first conducting up-front research to determine its feasibility. This method of adoption is possible, but it can cause many decision-makers to pivot again after a few months once unforeseen costs come to the forefront. Take cloud adoption and virtualization, for example. In the early 2000s, companies like Google, Amazon, and Salesforce introduced web-based services to manage digital workloads and make computing more efficient. Companies quickly adopted multi-cloud or hybrid cloud solutions to manage their businesses and protect their employees’ and clients' information. Now the workforce is going through another revolution. Working from home is more common, many aspects of our day-to-day lives are digital, and companies have a greater need for the level of security and compliance that only private cloud infrastructures can offer. Why, then, has there been such a shift in recent years toward cloud repatriation? Read on to find out more about measuring cloud computing costs and building a cloud computing infrastructure that enables your team to work more efficiently.

Measuring Cloud Computing Costs Has Caused Many CIOs to Reconsider Their Cloud Solution

Early adopters have the benefit of being at the forefront of the latest technology and innovation. However, being an early adopter comes with risks, and many CIOs and decision-makers who quickly merged their company’s processes and assets with the cloud are starting to measure their cloud computing costs and choosing to repatriate. When cloud computing is costly, misuse is often to blame. Used incorrectly, cloud computing can seem to cost more, but planning the provision process and accurately configuring assets can correct this miscalculation.
Most cloud providers deliver reports and suggestions to help administrators reduce costs, and every major cloud provider offers calculators to estimate costs. Even after provisioning, watch your cloud usage and review configurations. Most cloud configurations can be adjusted to lower budgets and scale resources back.

What is TCO in Cloud Computing?

One of the first steps of building a cloud computing infrastructure is calculating the foreseeable costs of the move. To do so, decision-makers can use total cost of ownership (TCO) as a helpful metric to compare the cost of their current infrastructure to the prospective costs of going hybrid or multi-cloud. But what is TCO in cloud computing? And is it a useful tool for weighing the cost-effectiveness of application modernization? Total cost of ownership refers to the total associated costs of an asset, including purchase price, adaptation, and operation. In cloud computing, specifically, TCO refers to all of the associated costs of purchasing and operating a cloud technology. Several factors make up TCO, including administration, capacity, consulting fees, infrastructure software, and integration. To properly calculate TCO, administrators must create a plan for migration and factor in the costs of maintaining the environment after the business relies on cloud resources.

Conducting a Cloud TCO Analysis & Determining ROI

Another important metric in cloud migration cost analysis is ROI, or return on investment. Many stakeholders and decision-makers may be familiar with ROI as a business term, but less familiar with the term in the context of cloud computing. TCO is a key input for measuring ROI: after the initial investment, the monthly cost savings should be greater than the costs of running the environment. Cost savings will exceed the initial investment if the company runs with a lower budget than it did using on-premise resources. An organization’s ROI is impacted by more than just the cost of infrastructure.
It’s also impacted by performance, availability, scalability, and the human resources necessary to maintain it. For example, the costs of running cloud resources every month could be cheaper than on-premise costs, but slow systems reduce productivity and could cost more in constant bug fixing and troubleshooting.

Measuring the Risks of Cloud Repatriation

After conducting a TCO analysis on your cloud solution, you may realize that there’s room for improvement, or savings, in your cloud strategy. But repatriation, or shifting from a public cloud model to an on-premise private server, comes with its own host of risks and potential migration costs that CIOs and company leaders will need to assess in determining when to shift and when to stay.

Repatriation Costs

Repatriation is the process of “reverse migration,” which means bringing data and applications back in-house. The costs of repatriation add strain to an IT budget, so migration back to on-premises infrastructure must be planned. Costs include the bandwidth required to migrate data and applications, the hardware necessary to support users and services, security tools, the personnel needed to support and maintain the resources, and any downtime costs. Administrators usually avoid repatriation unless it’s necessary, due to the costs, training, and downtime associated with migration.

Security & Compliance Risks

One of the most popular reasons for building a cloud computing infrastructure on public platforms is security assurance and compliance. However, this solution may not remain feasible for smaller organizations as the cost of cloud services continues to rise. If cloud resources are not configured properly, data breaches can occur. Small organizations with few security resources may find that the risks associated with migration, including compliance regulations surrounding cloud-hosted data, outweigh the associated savings.
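The TCO and ROI reasoning above can be made concrete with a short calculation; every figure below is invented purely for illustration:

```python
def cloud_tco(migration_cost, monthly_run_cost, months):
    """Total cost of ownership over a period: one-time migration
    cost plus recurring operating cost."""
    return migration_cost + monthly_run_cost * months

def roi(on_prem_monthly, cloud_monthly, migration_cost, months):
    """ROI as net savings relative to the migration investment."""
    savings = (on_prem_monthly - cloud_monthly) * months
    return (savings - migration_cost) / migration_cost

# Illustrative numbers only: a $120k migration that replaces $40k/month
# of on-premise spend with $30k/month of cloud spend.
three_year_tco = cloud_tco(120_000, 30_000, 36)
three_year_roi = roi(40_000, 30_000, 120_000, 36)
print(three_year_tco)            # 1200000
print(round(three_year_roi, 2))  # 2.0
```

Note how the same arithmetic runs in reverse for repatriation: the "migration cost" becomes the cost of moving back on-prem, and the hidden variables the article mentions (personnel, downtime, slower systems) land in the monthly figures.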
Consider Your Previous Cloud Migration Strategy

Your original cloud migration strategy will play a big role in determining the feasibility of repatriation. For instance, if your team migrated by replatforming, it may be too expensive or time-consuming to move back on-prem. Conversely, if your organization took a more “lift-and-shift” approach, there may be an opportunity for you to shift back, if doing so won’t compromise security and compliance. It’s not uncommon for organizations to try cloud migration with limited sample data and applications, and then later move more critical applications. The previous plan and migration process should be analyzed, and lessons learned should be carried into the next migration. That next migration should be smoother, with less downtime, and a test migration can reveal ways to reduce the cost of overhauling a larger system. Do any of these concerns resonate with you? Are you thinking about moving your workloads off the cloud? Come to our Deep Dive on cloud repatriation on January 20, 2022:
Organizations with all kinds of storage and hosting needs are adopting cloud infrastructure as a preferred solution. For these organizations, this means lower costs, faster speeds, and enhanced performance in general. What does this mean for the teams managing this infrastructure? In many cases, it means adapting to new strategies and a new environment. Among the most popular of these strategies right now is containerization, and among the most popular of these environments is Kubernetes. Mattias Gees is a solutions architect at Jetstack, a cloud-native services provider building enterprise platforms using Kubernetes and OpenShift. With Kubernetes and containerization gathering momentum as a topic of interest among our community members and contributors in recent months, we wanted to invite Gees to share what he has learned as a containerization specialist using Kubernetes on a daily basis. Gees and other representatives of Jetstack, a C2C Platinum partner, were excited to join us for a Deep Dive on the topic to share some of these strategies directly with the C2C community. Gees started his presentation with some background on Jetstack, and then offered a detailed primer on Kubernetes and its capabilities for containerizing on the cloud. This introduction provided context for Gees to introduce and explain a series of containerization strategies, starting with load balancing on Google Kubernetes Engine:

Another strategic solution Gees pointed out was one that has also been a frequent topic of discussion within our community, Continuous Delivery and Continuous Deployment (CD):

Kubernetes is a complex and dynamic environment, and different cloud engineers and architects will use it in different ways.
To give a sampling of the different potential strategies Kubernetes makes available, Gees listed some advanced Kubernetes features, including health checks, storage of config and secrets in Kubernetes objects, basic autoscaling, and advanced placement of containers:

https://vimeo.com/645382667

The most impressive segment of Gees’ presentation was his overview of the Kubernetes platform, including a screenshot of his own cloud-native landscape:

Gees concluded the presentation with a breakdown of the different team roles associated with Kubernetes modernization, stressing that implementing the many containerization strategies he identified is not the work of one person, but of many working in concert toward common goals:

Are you an architect working with Kubernetes in a cloud-native environment? Do you prefer any of these containerization strategies? Can you think of any you’d like to add? Reach out to us in the community and let us know! Extra Credit:
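The advanced features Gees lists (health checks, config and secrets stored as Kubernetes objects, and the resource requests that drive autoscaling and placement) all live in a workload manifest. Below is a minimal sketch, expressed as a Python structure that could be serialized to YAML or JSON for `kubectl`; all names and values are illustrative, not from Gees' presentation:

```python
import json

# A minimal Deployment sketch showing, in one place, the features named
# above: a liveness probe (health check), config/secrets consumed from
# Kubernetes objects, and resource requests used by autoscalers and the
# scheduler for placement. Names ("example-api", etc.) are made up.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "example-api"},
    "spec": {
        "replicas": 2,
        "selector": {"matchLabels": {"app": "example-api"}},
        "template": {
            "metadata": {"labels": {"app": "example-api"}},
            "spec": {
                "containers": [{
                    "name": "api",
                    "image": "gcr.io/example/api:1.0",
                    "livenessProbe": {  # health check
                        "httpGet": {"path": "/healthz", "port": 8080},
                        "periodSeconds": 10,
                    },
                    "envFrom": [  # config and secrets as Kubernetes objects
                        {"configMapRef": {"name": "api-config"}},
                        {"secretRef": {"name": "api-secrets"}},
                    ],
                    "resources": {  # basis for autoscaling and placement
                        "requests": {"cpu": "250m", "memory": "256Mi"},
                    },
                }],
            },
        },
    },
}

print(json.dumps(deployment, indent=2))
```

In practice these manifests are written directly in YAML; the point here is simply that each strategy Gees mentions corresponds to a small, declarative field on the workload spec.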
With migration to the cloud continuing across the public and private sectors at an accelerating rate, stories of successful migration projects are becoming especially timely and valuable. Organizations considering migration want to hear from organizations that have executed the process successfully. As these stories emerge with increasing frequency, sharing them within and among communities like C2C becomes not only natural but necessary. As we initially reported this October, NextGen Healthcare recently partnered with Managecore to simultaneously migrate their SAP applications from a private to a public cloud infrastructure and upgrade to the SAP HANA database. This was an ambitious migration project, and given the regulations around NextGen’s personally identifiable data, failure was not an option. Despite these unique considerations, the team completed the project in under six months. On October 28, 2021, C2C’s David Wascom connected with Karen Bollinger of NextGen Healthcare and Frank Powell of Managecore for a virtual C2C Navigator event exploring the background and the details of this successful project. The conversation began the way a migration process itself begins: the team established customer goals. When Wascom asked what customers typically want from a migration, Powell offered three main goals common to organizations considering migration: greater stability, lower fees and personnel costs, and “time to innovate and do new things for their organization.” After wrapping up this high-level overview, Wascom asked Bollinger and Powell for a more detailed description of the migration process. Bollinger outlined the main phases of the migration period, from moving the infrastructure from cloud to cloud, to updating the landscape to the latest service pack, to moving everything into the HANA database.
Powell stressed the importance of the preliminary phase of the migration, including testing and defining SAP strategy. The discussion became most lively when Wascom asked Powell and Bollinger about their data security strategy. As a healthcare provider, NextGen is beholden to HIPAA and attendant ethical and legal considerations concerning data security. “Security is on everyone’s mind, even on-prem,” said Powell. Bollinger was equally unequivocal, if not more so. “I have no choice,” she said. “I’m in healthcare.” What does it take to migrate a massive quantity of sensitive data successfully and securely? According to Bollinger, it takes a trusted partner. “What I was looking for was a partner,” she said. “A third-party partner that we could have these conversations with.” The sentiment resonated with Wascom, who added, “The fact that you were able to work towards a common goal is a hugely powerful story.” Powell agreed wholeheartedly. For him, partnership is not just a goal, it’s a requirement. “As a service provider, our goals have to align with our customers,” he said. “If they don’t, then right from the get-go, we have failed.” When Wascom asked Bollinger and Powell for final reflections and advice for other executives considering migrating their own organizations, both responded positively and succinctly. The biggest takeaway for Bollinger? “It can be done.” Powell was similarly encouraging. “Talk to someone who’s been successful at it,” he said. “Use those as your reference points.” The reason for this, in his words, was just as simple: “We’re dealing with some pretty amazing technology.” C2C brings people like Bollinger and Powell together to demonstrate the potential of cloud technology for organizations seeking solutions and success. How is your organization hosting its software and data? Have you considered a migration to the cloud, or to a different cloud infrastructure?
Would you like to hear from other organizations where similar projects have been successful? Reach out and let us know what you’re thinking, and we’ll incorporate your thoughts as we plan future discussions and events. Extra Credit:
Personal development and professional development are among the hottest topics within our community. At C2C, we’re passionate about helping Google Cloud users grow in their careers. This article is part of a larger collection of Google Cloud certification path resources. The Google Cloud Professional Cloud Architect is a key player on any team that wants to activate the full benefits of Google Cloud within its organization. According to Google, “this individual designs, develops, and manages robust, secure, scalable, highly available, and dynamic solutions to drive business objectives.” Candidates need to have proficient knowledge of cloud strategy, solution design, and architecture best practices before taking this exam. The Cloud Architect certification debuted in 2017 and quickly became the leading competitive-advantage certification for cloud job-seekers; for three years in a row, Global Knowledge has placed the Google Professional Cloud Architect at or near the top of its 15 top-paying IT certifications. The salary from holding this certification doesn’t exist in a bubble, however. Global Knowledge’s report includes additional analysis of its respondents, including average number of additional certifications, average age of the certification-holder, and popular cross-certifications (some of which also place high on the list). That said, we already know from the Associate Cloud Engineer overview that any Google Cloud certification is a substantial value boost in the job market. Now, for anyone who wants to break into that market, let’s get the basics out of the way. These certifications are well-compensated for a reason, so make some time to prepare and answer the following questions before sitting for this challenging exam:
- What experience should I have before taking this exam?
- What roles and job titles does Google Cloud Professional Cloud Architect certification best prepare me for?
- Which topics do I need to brush up on before taking the exam?
- Where can I find resources and study guides for Google Cloud Professional Cloud Architect certification?
- Where can I connect with fellow community members to get my questions answered?

View image as a full-scale PDF here. Looking for information about a different Google Cloud certification? Check out the directory in the Google Cloud Certifications Overview.

Extra Credit:
- Google Cloud’s certification page: Professional Cloud Architect
- Example questions
- Exam guide
- Coursera: Preparing for Google Cloud Certification: Cloud Architect Professional Certification
- Pluralsight: Google Cloud Certified Professional Cloud Architect
- AwesomeGCP Professional Cloud Architect Playlist
- Global Knowledge IT Skills and Salary Report 2020
- Global Knowledge 2021 Top-Paying IT Certifications

Have more questions? We’re sure you do! Career growth is a hot topic within our community and we have quite a few members who meet regularly in our C2C Connect: Certifications chat. Sign up below to stay in the loop.
Cloud-first companies see cloud-native Kubernetes technology as the key to building modern application infrastructure with high scalability and internal developer process automation. While many companies have started their Kubernetes adoption, the program can often encounter challenges and complexities soon after its launch. The recording from this Deep Dive includes:
- (2:00) Introduction to Jetstack
- (3:35) Agenda overview
- (4:00) Introduction to cloud native, Kubernetes, and microservices
- (9:45) Kubernetes and monolith application servers
- (14:00) Continuous delivery, GitOps, and advanced Kubernetes features
- (17:25) Picking the right workloads to migrate
- (19:55) Building application platforms to run on Kubernetes
- (23:30) Knowledge sharing and understanding the team needed to run Kubernetes projects
- (26:10) Final takeaways

This Deep Dive was presented by Jetstack, a foundational platinum partner of C2C and a Google Cloud Premier Partner which builds enterprise cloud-native platforms using Kubernetes and OpenShift. To connect with them, find @RichardC here in the community.
Managecore, a Foundational Gold Partner of C2C and Premier Google Cloud Partner, recently collaborated with NextGen Healthcare to migrate SAP to host on Google Cloud. In less than six months, Managecore supported moving NextGen’s SAP workloads in addition to upgrading to the latest version of HANA. Here to discuss the project on the C2C virtual stage were panelists from each company:
- Karen Bollinger — Vice President Business Applications, NextGen Healthcare
- Frank Powell — President/Partner, Managecore

Key Discussion Points:
- An introduction to NextGen Healthcare and the problems they were trying to solve by introducing a hyperscaler to their SAP environment and partnering with Managecore
- Using managed services from Google Cloud to open up new agile business opportunities and improved performance, confidence, stability, and availability
- Considerations for security and HIPAA compliance when migrating a healthcare company’s SAP data workloads to a new cloud environment

Watch the entire conversation here:
In 2019, the public cloud services market reached $233.4 billion in revenue. This already impressive number is made even more impressive by the fact that it represents a 26% year-over-year increase; a strong indication that app modernization and cloud migration continue to be winning strategies for many enterprises. But which cloud strategy should a decision-maker choose? When should they migrate their legacy applications into a hybrid, multi-cloud, or on-premise architecture? There may not be single definitive answers to these questions, but there are certainly different options to weigh and considerations to make before officially adopting a new process. Read on to find out more about multi-cloud vs. hybrid cloud strategies for startups, and join the conversation with other cloud computing experts in the C2C Community.

What is a Hybrid Cloud Strategy?

A hybrid cloud strategy is an internal organization method for businesses and enterprises that integrates public and private cloud services with on-premise cloud infrastructure to create a single, distributed computing environment. The cloud provides businesses with resources that would otherwise be too expensive to deploy and maintain in house. With on-premise infrastructure, the organization must have the real estate to house equipment, install it, and then hire staff to maintain it. As equipment ages, it must be replaced. This whole process can be extremely expensive, but the cloud gives administrators the ability to deploy the same resources at a fraction of the cost. Deploying cloud resources takes minutes, as opposed to the potential months required to build out new technology in house. In a hybrid cloud, administrators deploy infrastructure that works as an extension of their on-premise infrastructure, so it can be implemented in a way that ties into current authentication and authorization tools.

What is a Multi-Cloud Strategy?
Conversely, a multi-cloud strategy is a cloud management strategy in which enterprises treat their cloud services as separate entities. A multi-cloud strategy includes more than one public cloud service and does not need to include private services, as a hybrid cloud does. Organizations use a multi-cloud strategy for several reasons, but the primary ones are to provide failover and to avoid vendor lock-in. Should one cloud service fail, a secondary failover service can take over until the original service is restored. It’s an expensive solution, but it’s a strategy to reduce downtime during a catastrophic event. Most cloud providers have similar products, but administrators have preferences and might like one over another. By using multiple cloud services, an organization isn’t tied to only one product. Administrators can pick and choose from multiple services and implement those that work best for their organizations’ business needs. What is the Difference Between a Hybrid and Multi-Cloud Strategy? Though the differences might be slight, choosing the wrong cloud strategy can impact businesses in a big way, especially those just starting out. One of the primary differences between a hybrid and a multi-cloud strategy is that a hybrid cloud is managed as one singular entity while a multi-cloud infrastructure is not. This is largely because multi-cloud strategies often include more than one public service, each performing its own function. Additionally, when comparing multi-cloud vs. hybrid cloud, it’s important to note that a hybrid cloud will always include a private cloud infrastructure. A multi-cloud strategy can also include a private cloud service; if it does, the deployment is technically considered both a multi-cloud and a hybrid cloud strategy. The infrastructure is designed differently, but the biggest difference is cost. 
Hosting multi-cloud services costs more than using one service in a hybrid solution. Supporting a multi-cloud environment also requires more resources, because it’s difficult to create an environment where services from separate providers integrate smoothly with each other, and it requires additional training for any staff unfamiliar with cloud infrastructure. Which Cloud Strategy Has the Most Business Benefits? Every cloud strategy has its benefits, and most organizations leverage at least one provider to implement technology that would otherwise be too costly to host in-house. For a simple hybrid solution, use a cloud service that provides the majority of the resources needed. All cloud services scale, but you should find one that has the technology you need to incorporate into your workflows. Multi-cloud is more difficult to manage, but it gives administrators greater freedom to pick and choose their favorite resources without relying on only one provider. A multi-cloud strategy also provides failover should a single provider fail, so it eliminates the single point of failure that most hybrid solutions experience. A cloud provider has minimal downtime, but downtime occasionally happens. With a multi-cloud strategy, administrators can keep most business workflows running normally until the primary provider recovers. It’s hard to stand squarely on the side of one cloud strategy over another. Every business has its own unique variables and dependencies that may make a hybrid model more desirable than multi-cloud, or vice versa. The benefits of an on-premise infrastructure may also outweigh those of both hybrid and multi-cloud. The decision to go hybrid or adopt a multi-cloud strategy rests with each enterprise’s decision-makers. There are, however, some considerations businesses of any size and lifecycle stage can take into account before finalizing the decision. 
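The failover behavior described above can be sketched in a few lines of Python. This is a toy illustration, not a real load-balancing API: the provider names and the health-check dictionary are assumptions invented for the example.

```python
# A minimal sketch of multi-cloud failover: prefer the primary provider,
# fall back to the secondary when the primary is unhealthy.
# Provider names and the `health` mapping are illustrative assumptions.

PROVIDERS = ["primary-cloud", "secondary-cloud"]  # preferred order

def pick_provider(health):
    """Return the first healthy provider, preserving the preferred order."""
    for name in PROVIDERS:
        if health.get(name, False):
            return name
    raise RuntimeError("all providers are down")

# Normal operation: the primary serves traffic.
print(pick_provider({"primary-cloud": True, "secondary-cloud": True}))
# Catastrophic event at the primary: traffic fails over to the secondary.
print(pick_provider({"primary-cloud": False, "secondary-cloud": True}))
```

In a real deployment the `health` input would come from continuous health checks or DNS-level monitoring, but the ordering logic, always preferring the primary until it fails, is the essence of the strategy.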
What to Consider When Switching to a Hybrid Cloud Strategy Before choosing a provider, you should research each provider’s services, feedback, and cost. It’s not easy to choose a provider, but the one integrated into the environment should have all the tools necessary to enhance workflows and add technology to the environment. A few key items to look for are: authorization and authentication tools; speed and performance metrics; backups and failover within data centers; different data center zones for internal failover; logging and monitoring capabilities; usage reports; and convenient provisioning and configuration. Most cloud providers have a way to demo their services, or they give users a trial period to test products. Use this trial wisely so that administrators can determine the best solution for the corporate environment. Multi-Cloud vs. Hybrid Cloud for Startups Again, deciding between a multi-cloud and a hybrid cloud strategy depends on the needs of the company. For startups, there may need to be a greater emphasis on security and disaster recovery, in which case a multi-cloud management strategy would provide a company at the beginning of its lifecycle the protection it needs to grow. Conversely, to bring up one of the key differences between a hybrid cloud and a multi-cloud strategy, if an entity uses private cloud services, a hybrid cloud model would provide the startup with the flexibility it needs to make changes to its computing infrastructure as it becomes more established. Do Startups Benefit From an On-Premise Cloud Infrastructure? The short answer is yes, startups can benefit from an on-premise cloud infrastructure. Taking any services in-house, whether it's managing payroll or IT services, can help reduce costs and give businesses more visibility into their workflows. 
If there is a need to hold on to an on-premise cloud infrastructure, a multi-cloud strategy will allow that enterprise to maintain that computing system while also managing additional public cloud services separately. What Does the Resurgence of IT Hardware Mean for Cloud? Even though cloud adoption has been surging for some time among businesses (Gartner reported in 2019 that more than a third of organizations view cloud investments as a “top 3 investing priority”), IT hardware and in-house services have also experienced a resurgence in popularity. Many believe this new phenomenon, referred to as cloud repatriation by those in the IaaS (Infrastructure as a Service) industry, is the result of a lack of understanding around proper cloud management and containerization among IT decision-makers. They may initially choose to migrate certain applications into a hybrid cloud strategy only to abandon the effort because of workload portability challenges. In light of this shift, hybrid and multi-cloud strategies still reign supreme as cost-effective and secure ways to manage legacy applications and workloads. It may take a fair amount of planning and strategizing to decide which cloud strategy matches a company’s lifecycle stage, but cloud adoption certainly isn’t going anywhere any time soon.
DoiT International, a global multi-cloud software and managed service provider with deep expertise in Kubernetes, machine learning, and big data, hosted a webinar with AMD and Google Cloud to discuss key differences between Amazon Redshift and BigQuery. For startups evaluating their cloud options, this is an excellent conversation that covers common questions like “Why should I move to the cloud?” and “What are the best options for me: multi-cloud, hybrid, or all cloud?” and, of course, any question related to financing the expense. Watch the video below to hear from Matthew Porter, Senior Cloud Architect at DoiT International; Meryl Hayes, East Coast Team Lead at DoiT International; John Mansperger, Principal Solutions Architect at AMD; and Dan Chang, Enterprise Partner Sales Manager at Google Cloud. Thank you to our partner DoiT International, 2020 Google Cloud Global Reseller Partner of the Year, for sharing this webinar with the C2C Community.
Cloud security is an emerging technology, and even some of the most seasoned professionals in the cloud community are still learning how it works, or at least still thinking it through. If all of your data is stored on the cloud, and all of your apps are running on it, you want to know that those apps and that data are secure, and knowing that the cloud is an open, shared environment might not be an immediate comfort. Luckily, the cloud offers all kinds of security resources you can’t access anywhere else. Understanding how these resources can protect your data and assets is crucial to doing the best work possible in a cloud environment. Vijeta Pai is a C2C contributor and Google Cloud expert whose website Cloud Demystified provides comics and other educational content that makes cloud security accessible and intelligible to the average Google Cloud user. C2C recently invited Pai to give a presentation and host a discussion on all things cloud security, from threat modeling to shared responsibility arrangements to best practices, drawing on her work with Cloud Demystified as well as the content she’s published on the C2C blog. Watch her full presentation below, and read on for some of the key conversations from her C2C Talks: Cloud Security Demystified. After providing some background on types of cloud deployments (public, private, and hybrid) and the different elements of cloud security (technologies, processes, controls, and policies), Pai broke down the STRIDE threat model. This model defines every type of cybersecurity attack a cloud security system might be required to prevent. The six types are Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege. Watch below for Pai’s breakdown of the definitions and associated security considerations of each one: Next, Pai explained the different possible models used to share the responsibility for security between an organization and a cloud provider. 
The three models are Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS), and each allocates responsibility for people, data, applications, and the operating system (OS) differently: Pai kicked off the open discussion portion with a comprehensive review of cloud security best practices, which referred back to a post she wrote for the C2C blog, 10 Best Practices for Cloud Security in Application Development. As she does in the post, Pai went through these strategies one by one, from Identity and Access Management Control to Data Encryption to Firewalls. For anyone in the process of actively implementing their cloud security measures, Pai’s full answer is worth the watch: A unique opportunity for C2C members is the ability to ask questions directly to the experts, and Pai fielded several questions about specific aspects of the technology of Google Cloud itself. The first question came from C2C member Dickson Victor (@Vick), who was concerned with whether the cloud can support better security than an on-premise system. Pai’s answer spoke to the heart of the issue for most prospective cloud users: the policies, processes, and resources available in an open environment like the cloud versus those available in a locked, private system. Her response was nothing but encouraging: Pai also took a moment to let C2C community member Lokesh Lakhwani (@llakhwani17) plug the Google Cloud Security Summit, the first-ever tech summit on cloud security: The discussion wrapped up with a question about cybersecurity insurance and whether it might become an entire industry once cloud security becomes a new standard. Pai wasn’t sure how quickly the industry would explode. 
Still, she thinks there is room out there for growth and innovation, precisely because of the extent to which technology has become a necessary part of day-to-day life for so many people living through the COVID-19 pandemic, including Pai’s mother, who lives and works in India. Moreover, the more we live our lives on the cloud, the more we will need cloud security, which, to Pai, means there is plenty of opportunity right now for cybersecurity insurance companies to make their mark: Do you have questions or concerns about cloud security that Pai didn’t answer in this session? Feel free to share them in the comments, and connect with Pai directly. You can find her on LinkedIn or join C2C to keep up with her work and get in touch with other tech professionals working in the cloud security field.
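The IaaS/PaaS/SaaS split Pai walked through can be sketched as a small lookup table. The exact allocation below is illustrative only, it varies by provider and contract, and is not presented here as Pai’s or Google’s official responsibility matrix.

```python
# An illustrative sketch of the shared-responsibility model: who manages
# each layer (people/access, data, applications, OS) under each service
# model. The specific allocation is an assumption for demonstration.
RESPONSIBILITY = {
    "people":       {"IaaS": "customer", "PaaS": "customer", "SaaS": "customer"},
    "data":         {"IaaS": "customer", "PaaS": "customer", "SaaS": "customer"},
    "applications": {"IaaS": "customer", "PaaS": "customer", "SaaS": "provider"},
    "os":           {"IaaS": "customer", "PaaS": "provider", "SaaS": "provider"},
}

def who_manages(layer, model):
    """Look up which party manages a given layer under a service model."""
    return RESPONSIBILITY[layer][model]

print(who_manages("os", "PaaS"))          # the provider patches the OS
print(who_manages("data", "SaaS"))        # the customer still owns its data
```

One pattern this table makes visible: moving from IaaS toward SaaS shifts responsibility for lower layers to the provider, while data and access management typically remain with the customer under every model.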
Michael Pytel (@mpytel), co-founder and CTO at Fulfilld, shares stories from the team’s wins and losses in building out this intelligent managed warehouse solution. The recording from this Deep Dive includes: (2:20) Introduction to Fulfilld (4:10) The team’s buildout requirements for a cloud-based application, including language support, responsiveness, and data availability (9:15) Fulfilld’s Android-based scanner’s capabilities and hardware (12:25) Creating the digital twin with anchor points (14:50) Microservice architecture, service consumption, and service data store (19:35) Data store options using BigQuery, Firestore, and CloudSQL (23:35) Service runtime and runtime options using Cloud Functions (28:55) Example architecture (30:25) Challenges in deciding between Google Cloud product options (31:40) Road map for the warehouse digital assistant, document scanning, and 3D bin packing algorithm (39:00) Open community questions. Community Questions Answered: What does the road map include for security? Did using Cloud Functions help with the system design and partitioning coding tasks by clearly defining functions and requirements? Do you give your customers access to their allocated BigQuery instance? What type of data goes to Firestore versus CloudSQL? Other Resources: Google Cloud Platform Architecture Framework; Google Cloud Hands-On Labs on Coursera; Google Cloud Release Notes by Product. Find the rest of the series from Fulfilld below:
Let’s go wild here. Say you’re uncertain whether to keep your brain as is. You think a certain Harry Potter-type surgeon could create a better version. But you're not sure. You’re also afraid this surgeon will botch up your brain while he fiddles with improvements. So you have the surgeon construct cabinets in your skull, on the periphery of your brain, where he does all his work. If his virtual brains are better than the brain you have now, the surgeon replaces your brain with his creations. If they’re not, the surgeon continues producing virtual brains in his cabinets until you’re satisfied with his results. Those cabinets are called virtual machines. The layer that oversees and organizes these cabinets, while also giving the surgeon more room to work, is called the hypervisor. Virtual Machines In the computer world, we have the hardware, which is the equivalent of your body, and the software, the equivalent of your brain, that drives the body. Now, say you want to improve some existing software but are afraid that tinkering with it could irreversibly destroy the original system. Computer engineers solved that problem by building one or more virtual machines, or virtual cabinets (like mini labs), where they tinker with their prototypes, called instances, while the original stays intact. Hypervisor At one time, this software tool was called the “supervisor.” It’s the additional digital layer that connects each of your virtual machines (VMs), supervises the work being done in the VMs, and separates each VM from the others. In this way, your instances are organized and your VMs are rendered coffin-tight to outside interference, protecting your instances, or innovations. You’ve got two types of hypervisors: those that sprint side by side with the VMs and those that shimmy on top. 
In either case, the hypervisor serves as an “alley” for storing additional work information. Amazon’s Nitro Hypervisor Nine years ago, Amazon Web Services (AWS) noticed that software developers would very soon have a problem. Hypervisor systems were wasteful: they consumed too much RAM, they yielded inconsistent results, and their security would be challenged by an accelerating bombardment of software attacks. “What we decided to do,” Anthony Liguori, a principal software engineer at Amazon and one of the key people who planned and executed the venture, told me, “was to completely rethink and reimagine the way things were traditionally done.” The VMs and hypervisors are software. So, too, are all the elements, the input/output (I/O) functionalities, integral to these systems. What AWS did was tweeze out each of these I/Os bit by bit and integrate them into dedicated hardware, the Nitro architecture, using novel silicon produced by Israeli startup Annapurna Labs. Today, all AWS virtualization happens in hardware instead of software, shaving management software costs and reducing jitter to microseconds. Since 2017, more companies have emulated AWS and likewise migrated most of their virtualization functionalities to dedicated hardware, in some cases rendering the hypervisor unnecessary. This means all virtualization can now be done from the hardware tech stack without need of a hypervisor. Bottom Line Virtual machines are for deploying virtualization models, where you can build and rebuild instances at your pleasure while protecting the original OS. The hypervisor operates and organizes these VMs and stores additional work information. In the last few years, AWS developed its revolutionary Nitro virtualization system, in which software VMs and hypervisors were transmogrified into dedicated hardware form. In this way, working on instances becomes cheaper, faster, and more secure. Innovations also unfurl faster, since both VM and hypervisor layers are eliminated. 
More vendors, like VMware, Microsoft, and Citrix, emulated Amazon and introduced their own so-called bare metal hypervisors, too. These hypervisors are called Type 1. Meanwhile, Google Cloud uses the security-hardened Kernel-based Virtual Machine (KVM) hypervisor. KVM is an open source virtualization technology built into Linux, essentially turning Linux into a system with both a hypervisor and virtual machines (VMs). Although it’s Type 2 (since it runs on top of an OS), it has all the capabilities of a Type 1. Let’s Connect! Leah Zitter, PhD, has a Masters in Philosophy, Epistemology and Logic and a PhD in Research Psychology.
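As a small practical aside to the KVM discussion above: on a Linux host, KVM exposes itself to userspace through the `/dev/kvm` device node, so a quick way to check whether hardware-assisted virtualization is available is simply to test for that file. This is a minimal sketch; production tooling would also inspect CPU flags and permissions.

```python
import os

def kvm_available(dev_path="/dev/kvm"):
    """Return True if the KVM device node exists, i.e. the Linux kernel
    exposes hardware-assisted virtualization to userspace on this host."""
    return os.path.exists(dev_path)

# On a cloud VM without nested virtualization this typically prints False;
# on a bare-metal Linux box with VT-x/AMD-V enabled it prints True.
print("KVM available:", kvm_available())
```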
Many of the applications we interact with every day are supported by hybrid and multi-cloud infrastructure. And while this modernization creates many benefits in security, scalability, and continuous testing processes, effectively managing these sometimes disparate environments can be complicated. Enter Anthos, Google’s multi-service platform launching CI and application operations into the future of cloud computing. Continuous integration (CI) is integral to DevOps and modern hybrid cloud infrastructure. It allows developers and operations teams to orchestrate software packaged and deployed in containers. Instead of manually configuring and deploying software, Anthos empowers DevOps teams with tools that automate deployments, speed up delivery of applications, and provide access to cloud-native services that save time and money. What Is Anthos? Anthos is a managed application platform first introduced in 2019, when Google combined some of its cloud services with Google Kubernetes Engine (GKE) to create a system for unifying and modernizing application operations. Kubernetes is an open source system, originally developed at Google, used to orchestrate application deployment to the cloud. Software deployed with Kubernetes is packaged into a container along with its configurations and sent to the Google Cloud Platform (GCP). Using Kubernetes, DevOps teams can eliminate human errors and automate configurations during deployment. Automation is one of the most significant advantages of using Anthos. DevOps teams use Anthos to automate deployments to cloud-native environments in containers, the major component in microservices technology. Microservices break down large monolithic codebases into smaller components so that they can be individually managed and updated. The advantage of deploying to the cloud using Anthos is the speed of deployments, but GCP also offers performance improvements over on-premise infrastructure. 
Applications run on edge servers across Google data centers, so users across the globe will see a significant performance improvement regardless of their location. Anthos Components & Strategies for App Modernization While Kubernetes workload management is still very much at the heart of the platform, many other cloud technologies come together to create Anthos’ different components and build a tech ecosystem that is revolutionizing CI/CD. GKE: One of Anthos’ main components is managed Kubernetes workloads. Kubernetes is an enterprise solution for container deployments. In large enterprise environments, constant monitoring, deployments, and recreation of containers require excessive resources. GKE manages these resources and ensures that the environment runs smoothly and without performance degradation. GKE On-Premise: Organizations often work with hybrid solutions in large environments, meaning services run on-premise and in the cloud. GKE can be installed on-premise so that services can be deployed internally and in the cloud. You must have the infrastructure installed to run GKE on-premise, but GKE can manage both on-premise containers and those that run in the cloud. Istio Service Mesh: In addition to supporting Kubernetes architecture, Anthos also gives developers and operators greater connectivity through a federated network. When organizations leverage microservices using containers, they decouple the large codebase into smaller units. These smaller units must “speak” to each other, and the Istio Service Mesh provides the pathways through which these microservices communicate. Stackdriver: One of the biggest benefits of app modernization is full-stack observability and greater system health management. With Anthos, logging, tracing, and system monitoring are centralized within the platform, creating an opportunity for continuous deployment testing. 
GCP Marketplace: Should an organization find a pre-made package in the Google Cloud Marketplace that will help with productivity, administrators can easily install packages from the marketplace to their cloud. These applications are configured to run on GCP, so no configuration of virtual machines, storage, or network resources is necessary. GCP Cloud Interconnect: Another Anthos component that actively assists in app modernization is GCP Cloud Interconnect. In a hybrid environment, data from the on-premise network must sync with the cloud. Organizations must also upload data to storage devices in the cloud. GCP Cloud Interconnect provides a high-performance virtual private cloud network to transfer data between environments securely. How Can Anthos Multi-Cloud Help Modernize Application Operations? A large share of modern app development takes place in hybrid and public cloud environments. This calls for streamlined operational processes and app modernization throughout the development cycle, from deployment to testing and system monitoring. Here are just some of the ways Anthos is revolutionizing cloud-native ecosystems. Robust Observability & Shift-Left Testing: Anthos multi-cloud builds a smarter coding environment through quicker, earlier testing and greater observability. Observability finds its origins deep within machine diagnostics, but what is shift-left testing? It is the practice of placing testing earlier in the development process, and it allows developers and operators to improve the quality of their code deployments. Part of app modernization is shortening the distance between different development steps. With Anthos, system logs and traces are centralized with the Stackdriver technology to place testing power in the hands of developers and operators, not just insights teams. 
Anthos Enables Greater Flexibility: Instead of working with a monolithic codebase with rigid platform requirements, Google Anthos and microservice technology provide greater flexibility to deploy containers across a hybrid environment. Deploy to the cloud, to on-premise infrastructure, and even to a developer device without extensive configuration management and time-consuming bug fixes. Building Operational Consistency: Automation keeps deployments consistent and reduces human error from manual code promotions to production. Because configurations and code are packaged within a container and maintained in Kubernetes, every deployment remains the same and keeps code consistent across the cloud and on-premise infrastructure. What Is the Future of Anthos Multi-Cloud & Hybrid CI/CD Environments? Automation in DevOps continually proves to be the future of faster code deployments, reduced human error, and better consistency and performance in application execution. Continuous integration and delivery (CI/CD) can speed up deployments from several weeks to a few minutes in high-performance DevOps organizations. As organizations realize that cloud microservices offer better performance and faster code deployments, Anthos will evolve into a beneficial service for any enterprise with a hybrid or multi-cloud environment. Anthos allows DevOps teams to fully automate their deployments, giving your developers more time to innovate and your operations people more time to maintain and upgrade infrastructure. It saves time and money across all business units. It gives your organization the ability to maintain flexibility regardless of the software deployed and the platform required to run applications. 
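The “configurations and code packaged within a container and maintained in Kubernetes” idea above boils down to a declarative object that GKE (and Anthos) continuously reconcile. Here is a minimal sketch of what such a Deployment object looks like, built as a plain Python dict; the app name, image path, and replica count are hypothetical examples, not from the article.

```python
# A sketch of the declarative Kubernetes Deployment that GKE reconciles:
# you describe the desired state (image, replica count), and the platform
# keeps reality matching it. Name and image below are hypothetical.
def make_deployment(name, image, replicas=3):
    """Build a Kubernetes apps/v1 Deployment manifest as a Python dict."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

# Every deployment built this way is identical in structure, which is why
# automation eliminates the drift that manual promotions introduce.
manifest = make_deployment("orders-api", "gcr.io/example/orders-api:1.0", replicas=2)
print(manifest["spec"]["replicas"])
```

In practice this dict would be serialized to YAML and applied with `kubectl apply`, or submitted through the Kubernetes API; the point is that the same artifact deploys unchanged to cloud, on-premise, or a developer machine.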
Extra Credit https://cloud.google.com/anthos https://www.cuelogic.com/blog/what-is-anthos-the-multicloud-platform-by-google https://www.techzine.eu/blogs/cloud/48197/what-is-google-anthos-is-this-the-modern-cloud-infrastructure-youre-looking-for/ https://www.infoworld.com/article/3541324/what-is-google-cloud-anthos-kubernetes-everywhere.html https://www.acronis.com/en-us/articles/top-10-benefits-of-multi-cloud/
Dan Stuart, SVP of IT Services at Southwire Company, joined C2C on the virtual stage. Stuart shared how the 71-year-old manufacturing company set the bar in its industry by moving its mission-critical SAP ECC workload environment to Google Cloud. Key discussion points: What business problem were you trying to solve by moving to the cloud, and what aspects of cloud infrastructure are most important to you? How did you determine the right way to approach these challenges, and why was Google Cloud the solution? With the advent of cloud and Southwire’s move, how do you equip your team with cloud-related tools and skills? In what ways do you see Southwire Company taking advantage of other Google Cloud offerings in data, analytics, AI, ML, or industry-specific solutions for manufacturing? Having completed the project in July 2020 and now about a year post-migration, what has been the biggest payoff? Watch the entire conversation here: Want to learn more? Join us on May 26 for a technical overview with the Southwire team.
Look back on Earth Week 2021 with the C2C Community. This panel discussion was hosted by C2C community members to share their companies’ initiatives toward sustainability. L’Oreal kicked off the panel with their tips and tricks on a green cloud, shared by Herve Dumas, Group Chief Technology Officer, and Antoine Castex, GCP Architect Lead. 22d Consulting told the story of how sustainability was built into the core of their business DNA, shared by Dominik Kugelmann, Chief of Vision & Co-founder, and Marie Touchon, Customer Success Manager. ThoughtWorks delivered a short presentation on how development teams can reduce cloud carbon emissions, shared by Dan Lewis-Toakley, Green Cloud Lead & Senior Developer Consultant, and Danielle Erickson, Senior Consultant Developer. Links shared by the community: Cloud Carbon Footprint and its Cloud Carbon Footprint Repository; Digital Sobriety: A Responsible Corporate Approach; Google Cloud Region Picker; Lean ICT: Towards Digital Sobriety; Why Green Cloud Optimization Is Profitable for You and the Planet (Thoughtworks)
Look back on Earth Week 2021 with the C2C Community, where Google Cloud developer advocates Stephanie Wong and Alexandrina Garcia-Verdin and cloud sustainability lead Chris Talbott recorded a live episode of their GCP Podcast. Google first achieved carbon neutrality in 2007, and since 2017 Google has purchased enough solar and wind energy to match 100% of its global electricity consumption. Now Google is building on that progress to target a new sustainability goal: running its business on carbon-free energy 24/7, everywhere, by 2030. In this recording, hear about Google’s new Cloud Region Picker, which shares data on how Google is performing against that objective so that you can select Google Cloud regions based on the carbon-free energy supplying them. Links shared by the community: Carbon free energy for Google Cloud regions; Google Cloud Region Picker; Google Environmental Report 2019
Data centers house the computing power we need for many of our favorite cloud-based applications and services. With this power come increased carbon emissions and harm to wildlife and the environment. To help reduce energy consumption and carbon emissions, Google continues to improve its data centers and stays committed to using carbon-free energy to reduce reliance on fossil fuels. Why CFE? In the early years of cloud computing, many large Silicon Valley tech companies recognized the need to reduce the amount of energy required to power their equipment. The goals for tech companies like Google, reducing their reliance on fossil fuels and lowering their carbon footprints, stem from the “Go Green” movement. With a carbon-free energy (CFE) effort, data centers use renewable sources and function mainly on hydroelectric power. Both renewable energy and hydroelectric power produce no carbon emissions, which is better for the environment. To show the public that they are doing their part, many of the more prominent tech companies with data centers worldwide provide metrics that demonstrate their initiatives. Using CFE Metrics for Business Companies such as Google publish these metrics for the general public, but the metrics also help businesses determine where to run their applications. The higher the CFE rating, the more likely you can run an application in the specific region without affecting emission results. Google suggests running applications in zones with the highest CFE metrics, but using scripts to automate processes at a specific time of day will also reduce the energy necessary to power applications. In particular, Google indicates that its U.S.-based Iowa and Oregon data centers produce the least carbon emissions. If you can run your application in these zones without harming application performance, a majority of your computing power would not produce any carbon. 
According to Google, its best cloud locations in terms of CFE are Sao Paulo, Brazil (87%), Finland (77%), and Oregon, U.S. (89%). Choosing Google Cloud Regions with High CFE Ratings To contribute to a lower carbon footprint, you would choose a data center with the best CFE rating, but there are complications for some application owners. Selecting a region far from most of your users will increase latency and lower performance for those users. Also, application owners who must focus on compliance are restricted to specific regions. These concerns could make it challenging to choose an area with the best CFE rating. Google has some basic suggestions to help you choose a region and run applications with lower carbon emissions: pick a region with the best CFE rating where you can permanently run your new applications; run batch jobs in regions with the best CFE rating (batch jobs usually run during off-peak hours and can run anywhere without affecting your application performance); set a corporate policy around choosing data centers with a high CFE rating; and improve the efficiency of cloud resources so that they do not take as much computing power to run. Bottom Line Google has been committed to carbon-conscious operations since 2007, and the tech giant plans to run completely carbon-free by 2030. You, too, can do your part by choosing a data center that works with renewable energy and uses equipment that reduces the need to rely on fossil fuels. By working with data centers with a high CFE rating, you do your part to preserve the environment. Extra Credit https://cloud.google.com/sustainability/region-carbon https://datacenternews.us/story/google-cloud-publishes-carbon-emissions-data-for-every-cloud-region https://www.computerweekly.com/news/252497993/Google-Cloud-shares-carbon-free-datacentre-energy-usage-stats-with-users
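The region-selection guidance above, pick the allowed region with the highest CFE share, can be sketched in a few lines. The CFE percentages are the figures quoted in this article and will drift over time; the mapping of locations to Google Cloud region names is my own annotation.

```python
# A sketch of CFE-aware region selection. Percentages are the figures
# quoted in the article above (they change over time); the region IDs
# are the Google Cloud regions corresponding to those locations.
CFE_PERCENT = {
    "southamerica-east1": 87,  # Sao Paulo, Brazil
    "europe-north1": 77,       # Finland
    "us-west1": 89,            # Oregon, U.S.
}

def greenest_region(allowed=None):
    """Pick the region with the highest carbon-free energy share,
    optionally restricted to a list of compliance-approved regions."""
    candidates = allowed if allowed is not None else list(CFE_PERCENT)
    return max(candidates, key=CFE_PERCENT.get)

# Unconstrained, a batch job would go to the highest-CFE region;
# a compliance-restricted app only considers its approved regions.
print(greenest_region())
print(greenest_region(["southamerica-east1", "europe-north1"]))
```

A real policy would also weigh latency to users and data-residency rules, exactly the trade-offs the article describes, but the core selection step is just this maximization over an allowed set.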
The Google Cloud Tech channel on YouTube published this fun, multi-video "choose your own adventure" experience to explain the different options for scaling your cloud infrastructure. In this Choose Your Own Cloud Adventure, YOU make the choices for the engineers at the (fake) new startup EatAndRun, which specializes in pairing runners with the best food options while they're on the go. The company has gained national attention and user traffic is skyrocketing! Bukola and Max's application isn't ready to handle this massive spike, so they've proposed Cloud CDN or managed instance groups (MIGs) as viable solution candidates. Should they scale up their backend services or speed up the delivery of their assets? The choice is up to you! Which do you choose? Load Balancers and Managed Instance Groups, or Content Delivery Network and Caching? Share your thoughts below on which option you would choose and why. Have experience with this use case? We'd love to hear your story.
Whether you’re an experienced coder or an app development novice, orchestration platforms like Kubernetes and Docker Swarm are two great tools that can help streamline container deployment. As you search for an orchestration tool, you will come across these two common platforms. Docker dominates the containerization world, and Kubernetes has become the de facto standard for automating deployments, monitoring your container environment, scaling it, and distributing containers across nodes and clusters. The main difference between Docker and Kubernetes is that Docker is a containerization technology used to host applications; it can be used without Kubernetes, or with Docker Swarm as an alternative to Kubernetes. While both architectures are massively popular in the world of container orchestration, they have some notable differences that are important to understand before choosing one over the other. Today, we’re comparing Kubernetes and Docker Swarm’s containerization capabilities to help teams and engineers choose the right architecture for their app development purposes. What Is an App Container? To fully understand the differences between Docker and Kubernetes, it’s essential to understand what an app container is. In software development, a container is a technology that hosts applications. Containers can be deployed on virtual machines, physical servers, or a local machine. They use fewer resources than a virtual machine and interface directly with the operating system kernel rather than through a hypervisor, making containers a more lightweight, faster solution for hosting applications.
Application containers allow apps to run side by side without the multiple virtual machines a traditional environment would require, freeing up infrastructure storage and improving memory efficiency. Many large tech companies have switched to containerized environments because they are faster and easier to deploy than virtual machines. Container technology runs on any operating system, and containers can be pooled together to improve performance. What Are Kubernetes and Docker? Kubernetes and Docker Swarm are two popular container orchestration platforms designed to improve app development efficiency and usability. Both orchestrate containers, which bundle app dependencies like code, runtime, and system settings into packages that ultimately allow apps to run more efficiently. Kubernetes is an open-source container orchestration platform created by Google; the project began in 2014. Docker, the container technology Docker Swarm orchestrates, was released by Docker, Inc. one year earlier, in 2013, to improve app development’s scalability and flexibility. Still, the two projects have different architectural components with different capabilities, which fuel the Kubernetes vs. Docker Swarm debate. Kubernetes Architecture Components A critical difference between Kubernetes and Docker Swarm exists in the infrastructures of the two platforms. Kubernetes components are modular; the platform groups containers into pods and distributes load among them. This differs from Docker Swarm, whose architecture uses clusters of machines running Docker software for container deployment. Another main difference between the two platforms is that Kubernetes itself runs on a cluster: several nodes (e.g., virtual machines or servers) that work together to run an application.
It’s an enterprise solution built for performance and monitoring across multiple containers. Scalability Another difference between Kubernetes and Docker Swarm is scalability. Should you decide to work with other container services, Kubernetes will work with a range of container runtimes, allowing you to scale across different platforms. As an enterprise solution, it runs on clusters where you can add nodes when additional resources are required. Deployment Docker Swarm is specific to Docker containers and deploys without any additional installation on the nodes, since Swarm mode is built into the Docker Engine. Kubernetes, by contrast, requires a container runtime to be installed on each node before it can run containers. Kubernetes is configured declaratively: you describe containers in YAML manifests, and its API drives them to the desired state. Load Balancing Load balancing is built into Kubernetes. Kubernetes deploys pods, each comprising one or more containers. Pods are scheduled across the cluster, and a Kubernetes Service load-balances incoming traffic among them. Docker Swarm Architecture Components Docker Swarm takes a different approach to creating clusters for container orchestration. Unlike Kubernetes, which distributes load across pods of containers, a swarm consists of machines running the Docker Engine that host containers and distribute work among themselves. Scalability Docker Swarm is specific to Docker containers. It scales well with Docker and deploys faster than Kubernetes, but you are limited to Docker technology. Consider this limitation when choosing between Docker Swarm and Kubernetes. Deployment While Docker Swarm allows much faster, ad-hoc deployments than Kubernetes, it has more limited deployment configuration options, so research these limitations to ensure they will not affect your deployment strategies. Load Balancing Docker Swarm’s built-in DNS component handles incoming requests and distributes traffic among containers.
Developers can publish service ports to control how incoming traffic is distributed to the containers running a service. Difference Between Docker and Kubernetes To recap, while Kubernetes and Docker Swarm have many similar capabilities, they differ significantly in scalability, deployment, and load balancing. Choosing between them ultimately comes down to an individual developer’s or team’s needs: whether their containerization workflows are better suited to a platform capable of speedy deployments, like Docker Swarm, or one offering flexibility and built-in load balancing, like Kubernetes. When to Use Kubernetes Google developed Kubernetes for deployments that require more flexibility, configured through YAML. Because Kubernetes is so popular among developers, it’s also a good choice for people who need plenty of community support with setup and configuration. Another good reason to choose Kubernetes is if you run on Google Cloud Platform, since the technology integrates smoothly with Google services. Kubernetes is an enterprise solution, so its flexibility comes with additional complexity, making it more challenging to deploy. Once you overcome the challenge of learning the environment, however, you have more flexibility to execute your orchestration. When to Use Docker Swarm Because Docker Swarm was built directly for Docker containers, it’s useful for developers who are learning containerized environments and orchestration automation. Docker Swarm is easier to deploy, so it can be more beneficial for smaller development environments. For small development teams that prefer simplicity, Docker Swarm requires fewer resources and less overhead.
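To make the YAML-manifest point concrete, here is a minimal sketch of the declarative configuration Kubernetes consumes, built as a plain Python dict. The names used (a "web" Deployment running an nginx image on port 80) are illustrative, not from the article. kubectl accepts JSON as well as YAML, so the dict can be serialized with json.dumps and applied with `kubectl apply -f -`.

```python
import json

def deployment_manifest(name, image, replicas=3):
    """Build a minimal Kubernetes Deployment as a plain dict.

    The `replicas` field is the horizontal-scaling knob; the shared
    labels are what a Kubernetes Service would use to select these
    pods and load-balance traffic across them.
    """
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [
                        {
                            "name": name,
                            "image": image,
                            "ports": [{"containerPort": 80}],
                        }
                    ]
                },
            },
        },
    }

manifest = deployment_manifest("web", "nginx:1.25", replicas=3)
print(json.dumps(manifest, indent=2))
```

Docker Swarm expresses the same intent more tersely (`docker service create --replicas 3 ...`), which illustrates the trade-off discussed above: less configuration surface, but also fewer knobs.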
Extra Credit https://searchitoperations.techtarget.com/definition/application-containerization-app-containerization https://www.docker.com/resources/what-container#:~:text=A%20container%20is%20a%20standard,one%20computing%20environment%20to%20another.&text=Available%20for%20both%20Linux%20and,same%2C%20regardless%20of%20the%20infrastructure. https://www.sumologic.com/glossary/docker-swarm/#:~:text=A%20Docker%20Swarm%20is%20a,join%20together%20in%20a%20cluster.&text=The%20activities%20of%20the%20cluster,are%20referred%20to%20as%20nodes. https://thenewstack.io/kubernetes-vs-docker-swarm-whats-the-difference/ https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/
Covering leadership, change management, and Google Cloud go-to-market strategy, the first C2C Navigator of the year, featuring Bob Evans, creator of Cloud Wars Media, gave members answers on the hottest topics informing tomorrow’s decision-making. The Cloud Wars Top 10 reports chart the success Google Cloud is bringing to the cloud ecosystem. With Google Cloud now in the Top 3, Evans and Sabina Bhasin (@ContentSabina) discussed what Google Cloud is doing right and where it is going in 2021 and beyond.

Key Discussion Points:
Google Cloud regional strategies and localization tactics
Lessons on agility and pivoting when it matters most
Three things organizations can do to ensure a smooth transition to digitization
Losing to win: why the $5.06B is excellent news
The power and purpose behind hearing and responding to the "voice of the customer," and why the C2C community wins

Community Questions Answered:
How do company leaders react when you move them down the list?
With the advent of the cloud, how important are skills in a specific vertical (such as healthcare) when looking for cloud architect positions?
What are some of the key elements cloud leaders need to put in place today to be at the top of the list three years from now?
After talking with CEOs and their leadership teams over this past year, what is the number one thing on their agendas that most surprised you?

Watch the full conversation here: Stay tuned for a full breakdown of the key moments from this discussion, including video clips and resources. Coming soon!
C2C Deep Dives invite members of the community to bring their questions directly to presenters. Do you have questions about all the options for securing communication between serverless compute products on Google Cloud? In this C2C Deep Dive, Guillaume Blaquiere (@guillaume blaquiere), cloud architect at Sfeir, covered OAuth 2 token usage (access tokens vs. identity tokens), virtual private cloud (VPC) access, private network access, load balancers, and ingress and egress settings. Watch the video to learn how you can start taking control of your serverless infrastructure, and see how Guillaume answers the following common security questions:
What about patch management?
How do you manage the network?
How do you ensure high availability (HA)?
How do you control access "from" and "to" the service?
How do you mitigate DDoS attacks?
Download the slides.
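One pattern in this space, calling one serverless service from another with an identity token, can be sketched as follows. On Google Cloud, a workload requests an identity token for a target audience from the metadata server and presents it as a Bearer token. The metadata endpoint and header below are the documented ones, but the Cloud Run URL is made up, and this is our own sketch of the pattern, not code from Guillaume's talk.

```python
from urllib.parse import urlencode

# Documented GCP metadata-server endpoint for minting identity tokens.
# It is only reachable from inside a Google Cloud workload (e.g. a
# Cloud Run service or Compute Engine VM), so the request itself is
# not performed here; we only build it.
METADATA_IDENTITY_URL = (
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/identity"
)

def identity_token_request(audience):
    """Build the (url, headers) pair for requesting an identity token.

    `audience` is the URL of the service you intend to call; the
    receiving service verifies the token was minted for that audience.
    """
    url = METADATA_IDENTITY_URL + "?" + urlencode({"audience": audience})
    return url, {"Metadata-Flavor": "Google"}

# Example with a made-up Cloud Run URL.
url, headers = identity_token_request("https://billing-api-abc123-uc.a.run.app")
print(url)
```

Once fetched (for example with urllib.request from inside the workload), the token goes in an `Authorization: Bearer <token>` header on the call to the target service, whose IAM policy must grant the caller's service account the invoker role.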