Learn | C2C Community

The Human Side of Things: How Connecting in the C2C Community Helped Workspot and Wealthsimple Connect To Workers Around the World

Wealthsimple is an online investment management service based in Toronto, Canada. The company provides a range of financial services for investors, including custom portfolio development and regulated crypto trading, but it also provides a tax filing service open to all users. This means that every year come tax season, Wealthsimple does enough business to warrant hiring an entire cohort of remote workers.

At the beginning of 2023, Wealthsimple was looking for a vendor. The company needed a Virtual Desktop Infrastructure (VDI) partner to service about a hundred new staffers working on Windows systems. A team met with Google to source some candidates, and Google suggested C2C partner Workspot, a cloud-native VDI technology provider specializing in delivering virtual apps and desktops to end users.

Wealthsimple initially considered three other vendors in addition to Workspot. Its main priority was logistics: shipping laptops to remote workers just for tax season, a contract that might last only a few months, could involve numerous headaches. Workspot’s modern VDI solution was a natural workaround. Wealthsimple was interested, but before committing, the team decided to attend the C2C Cloud Adoption Summit in Toronto, where the Workspot team would also be onsite. The experience made all the difference.

“We’re all very personable people, so that really helped, meeting in person and getting that face-to-face connection, hanging out a little bit. That really sealed the deal for us from a human side of things,” says Wealthsimple Infrastructure Project Coordinator Jessica Boyle. “We decided that Workspot was going to be our solution, and we’ve been running with it ever since.”

Workspot, meanwhile, had maintained a presence in Toronto for some time.
“We make a joke about Rob [Scola, Director of Sales, Workspot] (@Rscola) that he basically lives in Toronto because he’s literally always up there,” says Bogdan Petrescu (@bogdan.petrescu), Workspot Senior Director of Google Cloud Partnerships. After the two companies met at the Cloud Adoption Summit, when Workspot was returning for a 2Gather event in the fall and looking for a customer to collaborate on a presentation, Wealthsimple was a natural choice. “Wealthsimple was one of a few customers that we were working with in Toronto, but probably the company that we were furthest along with and had built the best relationship with as a result of that facetime.”

Wealthsimple and Workspot eventually joined C2C’s October 3 2Gather in Toronto to give a joint presentation on using Workspot’s modern, cloud-native VDI solution for hybrid and remote work. Wealthsimple IT Specialist Bruno Ramos was initially very nervous about speaking onstage. “I was shaking,” he says. Having been connected with Workspot for the better part of the year, though, helped alleviate that tension. “It seemed like we were really good buddies, everyone on stage, really easy conversation, really easy talk…it felt like family.”

The presentation also led to a huge influx of interest in Workspot’s product and the solution it had created for Wealthsimple. “At the end, I spoke to probably seventy people. So many questions that they had in terms of how Workspot is, why did we choose Workspot specifically, our use case, so many questions, and I know for a fact that other companies inquired about Workspot, and specifically they spoke to Rob,” Bruno says.
Bogdan confirms this account: “Indeed, we had two or three companies reach out to us afterwards like, ‘Hey, we’d like to explore this and see if it matches what we’re looking into.’”

The team attributes this success to two things: the tight-knit nature of the tech community in Toronto, and the community atmosphere of C2C’s events. “Everybody knows someone that knows someone, so we do talk a lot throughout several events,” Bruno says of the city. “There is a really tight community, and we do speak about everything that we do and then compare to other companies, what they do.”

“The community piece really sticks out too in C2C,” Jessica adds. “It’s very comfortable, and you can really feel that sense of community with people. It’s not scary…some tech events are a little bit more daunting, and you don’t talk as much to other people, but C2C gives that environment where you’re comfortable to do that.”

Categories: Infrastructure

Enterprise Software and the Role of the CIO: An Interview with Mark Templeton of Workspot

Mark Templeton spent more than twenty years at Citrix Systems before leaving in 2015. Since then, he has served as an executive and board member at numerous companies in tech and beyond. In 2020, he joined C2C partner Workspot as Chairman and Director, which soon brought him into the C2C community as a guest at in-person events hosted in collaboration with Workspot. Mark’s years of experience and expertise as a leader in the IT space make his presence in our community a singular benefit to our members. To make that experience and expertise available to the rest of the C2C membership, we sat down with Mark for a wide-ranging conversation about the current enterprise software market and what CIOs can do to help their businesses scale for long-term success.

What's the most important trend currently defining the enterprise software/services market?

We’re looking at a world now, if you're a CIO, where you have to have an opinion about the things you can control and the things you can’t, and how each of those contributes to the platform on which your business is built. That means you have to build systems that have the agility, flexibility, and resilience to withstand things that are out of your control, whether it’s a hurricane, a flood, any of these sorts of geophysical events.

Having a community of like-minded individuals to rely on is also important. Groups like C2C bring together technology and cloud users from across the globe to identify the latest trends on a daily basis. One ongoing discussion is about how the acceleration of enterprise SaaS across every segment of enterprise computing has become the reality. I think most enterprise CIOs have embraced it at some level, but it’s actually accelerating and picking up speed. What’s behind this acceleration of SaaS in recent years is of course the pandemic, and evolving work styles. Another huge element is the maturing of hyperscale clouds. It’s been amazing what’s happened in just the last five years in terms of the number of new services and what’s possible. The C2C community and team has facilitated discussions on this issue and will continue to do so as it gains steam.

What makes this trend so significant?

Consumerization of software is at the heart of driving and changing IT mindsets from meeting the minimum to actually going beyond and providing a consumer-like experience in enterprise computing. Adopting this “consume it yourself” approach allows IT’s energy, budget, and attention to be directed at business solutions that drive growth, reduce costs, increase velocity, protect IP, and engage and retain talent.

Which vendors are leading this field?

The one Google Cloud customer that’s a standout to me is Equifax. I served on the board of directors of Equifax for about ten years, including through the breach. We had all of our own data centers and infrastructure, and we really were underestimating two things: our ability to stay ahead of the hackers and the bad guys, and the role of hyperscale clouds when it came to security, which still holds true for lots of IT organizations. We came to understand that our fewer than ten global data centers were a fraction of the footprint and attack-vector size of Google Cloud. Every day Google Cloud is attacked a thousand times more than our data centers were, and as a result, as we all know, seeing attacks, detecting attacks, and understanding attacks is what allows you to get ahead.

We realized at Equifax we were never going to replicate that kind of security model and competency, so we decided to go all in on Google Cloud. Not only has it created an amazing security consciousness and capability, it’s also allowed the company to create new business models. Equifax has been able to build a very sizable incremental business on Google Cloud as well as really accelerate the security profile of all of the existing business, so they’re definitely a leader.

Some other examples are Intel and AMD, both of whom are C2C Global partners, because compute is a core part of any hyperscale cloud. Because of AI and ML and the need to pipeline and process tremendous amounts of data collaboratively across CPUs and GPUs, all of a sudden the network has become the new backplane for the worldwide computer.

There are a lot of winners in the areas of enterprise SaaS and systems of record like ERP, HR, and CRM: Salesforce, Workday, ServiceNow, etc. One that’s near and dear to my heart is the whole idea of the enterprise PC becoming a SaaS offering. That’s called a Cloud PC, and there are leaders in Microsoft, Amazon, and Workspot, where all of a sudden, with a PC computing utility, you can use appliance-like devices for access, do it anywhere, and get a PC- and workstation-like experience on them. There are new leaders being minted every single day, too numerous to list.

How should a CIO approach this trend?

The first big approach the CIO has to take is to drive a mindset of change and growth, and do that by rejecting the inexorable power of inertia. Inertia is the number one thing that keeps organizations from embracing change. If you don’t have a CIO with that tone at the top, everything else becomes difficult.

CIOs must also embrace enterprise SaaS and dig in with the SaaS provider to understand how it’s built, because it does matter. You have to insist on cloud-native, because so many legacy ISVs have moved to the cloud with a lift-and-shift model, and the lift-and-shift model lacks a whole bunch of the native capabilities that clouds offer, like elasticity and resilience to failure.

The third thing is not to adopt hybrid as a permanent state. The reason not to buy it is, I use this example: a plug-in hybrid car has electric engines and it has gasoline engines. All of a sudden you took an automobile that has a hundred thousand parts and now you have a hundred and twenty-five thousand parts. That’s not the way to increase availability and reliability and lower costs. It’s about eliminating moving parts, so that’s why hybrid is a transitional tool in my opinion.

The fourth thing in terms of approach is you’ve got to insist on multicloud support. You have to deeply inspect the cyber characteristics and the readiness of the offering around cyber. And then I would wrap up with something I think is close to the top of the list, and that is to insist on full observability and transparency. That means you’ve got to really look at the telemetry the SaaS offering has, so that it can feed systems and allow you to answer the kinds of business questions you need to answer when you’re at the C-suite table: service levels, costs, which systems are in greater use and which are in lesser use. There are a lot of these business questions that are pretty difficult for CIOs to answer today, which is why we rely on communities like C2C to connect and find answers.

Categories: Infrastructure, Interview

Cloud Cost Reduction: Our Top 9 Cloud Cost Optimization Strategies

Cloud cost optimization is the process of minimizing costs without impacting the performance or scalability of workloads in the cloud. Cloud cost best practices are rooted in identifying unwanted resources and scaling services accurately to eliminate waste. Many external factors, including inflation and a changing labor market, are forcing businesses to restructure their financial priorities.

Though many models of cloud computing offer flexible payment structures and pay-as-you-go pricing, cloud cost optimization strategies allow businesses to tighten their grip on resources. These strategies also highlight whether the resources being used are in alignment with the infrastructure and business goals of an organization. The following strategies will help you run applications in the cloud at lower cost.

1. Eliminate Resources

One of the simplest yet most effective cloud cost-saving strategies is to eliminate resources that are not fully benefiting the business. For example, users may allocate a service to a temporary workload. Once the project is complete, an administrator may not eliminate the service right away, resulting in unwanted costs for the organization. The solution is to examine the cloud infrastructure for servers that no longer serve business needs. Cloud cost optimization is not just about eliminating spending but also about ensuring that costs align with an organization’s objectives; if a particular server or project no longer serves the business, eliminating it enhances cloud infrastructure optimization. This can be accomplished through routine scanning and testing to identify idle resources.

2. Rightsize Services

Rightsizing is allocating cloud resources according to the workload. It allows users to analyze services and adjust them to the appropriate size for the needs of the business. By evaluating each service and modifying its size until it matches a specific need, cloud computing services can deliver maximum capacity at the lowest possible cost. In addition, many businesses rely on vendors to deploy cloud resources when they do not understand their own operational goals. The solution is to develop rightsizing approaches customized to your business, strengthening cloud resource optimization. Customized approaches create transparency: a clear view of which resources your specific cloud infrastructure needs. Rightsizing also surfaces how heavily certain resources are being used, which can inform decisions to upgrade or terminate specific services.

3. Create a Budget

Develop a clear budget with engineers, product managers, and other team members for utilizing cloud computing services, setting a monthly figure rather than an arbitrary number. Building a culture rooted in transparency and cost awareness will also influence how users consume cloud services.

4. Monitor Spending

Cloud computing platforms may introduce small incremental pricing changes, but users should watch for unexpected spikes that impact overall spending. A good safeguard is an alert that fires when cloud costs exceed the budget. Detecting the root of a large increase and analyzing its cause also ensures that the same overspending does not recur, allowing for stronger cloud cost control.

5. Select Appropriate Storage Options

Organizations need to consider many factors when selecting a storage option. Performance, security needs, and cost requirements should all be taken into account when selecting a storage model. Selecting a storage tier that is easy to work with and aligned with the budget is critical to cloud cost efficiency. Storage tiers that are underused should be removed for cloud cost reduction purposes.

6. Use Reserved Instances (RIs)

If an organization will be using resources for a known period of time, consider purchasing a reserved instance. These are discounted, prepaid commitments, similar to savings plans, that are ideal for steady workloads with a clear timeline. When purchasing an RI, the organization selects the type, a region, and a time frame, which may vary depending on the purchase.

7. Manage Software License Costs

Software licenses can carry high costs, and monitoring them can be challenging. There are often forgotten fees associated with licenses, and many organizations risk paying for licenses they have stopped using. A thorough software audit will not only help you understand what software is being used within the business but also demonstrate which software is critical and which licenses are no longer needed.

8. Define Clear Metrics

Highlight the metrics that matter most to your organization. Metrics such as performance, availability, and cost can feed reports and dashboards that outline activity in the cloud. Major cloud providers support tagging of resources, which allows an organization to create detailed reports that provide insight into cloud cost analysis. These reports should be used to track spending, as they reveal trends in expenditure.

9. Schedule Cloud Services

It is common for organizations to have services that sit idle during certain times of the day. Reduce spending by scheduling services to run only in the time slots when they are fully used. A duty scheduler tag can be used to implement the schedule, and a heatmap can help establish when services are underused in order to determine an effective scheduling arrangement.

SADA, a cloud consultancy that helps other businesses on their cloud journeys, recognizes how effective this strategy can be. SADA’s Director of Customer FinOps, Rich Hoyer, states: “Of these strategies, we have found that scheduling cloud services’ runtimes are often one of the largest overlooked savings opportunities we encounter. Specifically, non-production workloads, such as testing, development, etc., are commonly left running full-time, 24/7, instead of being scheduled to run only when used. The potential savings of running those workloads only during business hours are often surprisingly large, and they can usually be realized via simple automation and modest revisions to maintenance schedules. The first step is to analyze exactly what is being spent on these resources during the hours they sit idle. The number is often large enough to quickly motivate the implementation of a workload scheduling regime!”
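Hoyer's point about idle hours lends itself to a quick back-of-the-envelope calculation. The sketch below uses hypothetical rates and schedules, purely for illustration:

```python
# Estimate savings from running a non-production workload only during
# business hours instead of 24/7. All figures are hypothetical.

HOURS_PER_WEEK = 24 * 7  # 168 hours of full-time operation

def weekly_savings(hourly_rate: float, business_hours_per_week: float) -> float:
    """Spend avoided by scheduling a workload to run only when used."""
    idle_hours = HOURS_PER_WEEK - business_hours_per_week
    return idle_hours * hourly_rate

# Example: a dev/test environment costing $0.50/hour, used 10 hours a day
# on weekdays (50 hours a week) but left running around the clock.
rate = 0.50
used = 50
print(f"Weekly savings: ${weekly_savings(rate, used):.2f}")  # 118 idle hours at $0.50
print(f"Share of spend avoided: {1 - used / HOURS_PER_WEEK:.0%}")
```

Even at these modest numbers, roughly 70% of the workload's spend is going to hours when nobody is using it, which is exactly the "surprisingly large" figure Hoyer describes.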

Categories: Infrastructure, Google Cloud Strategy, Industry Solutions, Cloud Operations, Google Cloud Partners, Financial Services

Building for Scalability, Block by Block: An Interview with Carrefour Links CTO Mehdi Labassi

When I ask Mehdi Labassi (@Mehdi_Labassi), CTO of Carrefour Links, what he does outside of work, the first thing he mentions is his family. Mehdi spends a lot of his free time playing with his kids. Sometimes they play video games on Nintendo Switch, but they also enjoy hands-on activities like building with Legos. Lego is a popular interest among tech practitioners building products on Google Cloud––after all, the four letters in “Lego” can also be used to spell “Google.” This connection turns out to be a fitting point of departure for an examination of Mehdi’s journey to a decision-making role on the technical team at Carrefour Links.

Mehdi began his career as a software engineer, working first in air travel and then moving on to Orange, “the one major telco in France.” At Orange, Mehdi led the company’s Google Cloud skills center and took part in a major migration to Google Cloud from a historically on-premises infrastructure. “We had a really strong on-prem culture, so we had our own data centers, our own Hadoop clusters with thousands of machines, and the shift to cloud-based services was not something natural,” he says. “There was a lot of resistance, and we needed to really show that this gives us something.”

Proving the value of the cloud to a historically on-prem organization required zeroing in on a specific technical limitation of the existing infrastructure: “As I was driving the big data platforms and the recommendations, I do remember we had a lot of issues in terms of scalability.” Google Cloud turned out to be the perfect solution to this problem. “Then we tried the Cloud, and we found that instant scalability,” says Mehdi. “That’s another level compared to what we had on prem, so this is really the proof by experimentation.”

When Carrefour introduced Carrefour Links, its cloud-hosted retail media and performance platform, in spring of 2021, Mehdi was immediately interested in getting involved. He reached out directly to the executive team and joined as CTO three months after the company announced the platform. “I joined when the thing just got in production, the first version, the V1. That was kind of a proof of concept,” Mehdi says. In the time since––only a little over a year––the venture has grown considerably: “We have a lot more data from different verticals, everything that’s related to transactions, to the supply chain ecosystem, to finance, a lot more insights, and we are exploring machine learning, AI use cases… so we are scaling even in terms of use cases.”

Even a fast-growing platform run on Google Cloud, however, will encounter challenges as it continues to scale. “The first thing is the ability to scale while keeping FinOps under control,” Mehdi says. As he sees it, this is a matter of “internal optimization,” something he believes Carrefour Links handles particularly well. “The second thing is how to provide what I call a premium data experience for our customers, because we are dealing with petabyte-scale pipelines on a daily basis, and however the end user connects to our data solutions, we want him to have instantaneous insights,” he adds. “We leverage some assets and technologies that are provided by Google Cloud to do this.”

These are challenges any technical professional managing products or resources on the cloud is likely to face. Overcoming them is also what makes new solutions on the cloud possible. What competencies do IT professionals need to overcome these challenges and pursue these solutions? According to Mehdi, “a good engineer working on the cloud, with this plethora of tools, he needs to be good at Lego.” Mindstorms, Lego’s line of programmable robots, he explains, require a lot of the same skills to build as machines and systems hosted on the cloud. “You assemble and program the thing, and then you need to understand how each brick works,” he says. “I really find a lot of similarities between these activities and what we are doing in our day job.”

Categories: Infrastructure, Industry Solutions, Databases, Retail

Getting Maximum Value from Cloud: Key Takeaways from C2C's Deep Dive on Cloud Repatriation

“Cloud repatriation,” like “cloud migration” and “cloud native,” is a tech term borrowed from the language of social science: all of these terms describe a relationship to a place of origin. What each really describes, though, is where someone, or something, lives. In social science, that someone is a person, someone born a citizen of one country or returned there after displacement by conflict or other political circumstances. In tech, the something born in or returned to its place of origin is an asset or resource an organization controls: your organization’s data, its software, or whatever else you need to store to be able to run it.

After years of cloud migration dominating the conversation about software and data hosting and storage, the term “cloud repatriation” is emerging as a new hypothetical for migrated and cloud-native organizations. So many organizations are now hosted on the cloud that a greater number than ever have the option, feasible or not, to move off. Whether any cloud-native or recently migrated organization would actually want to move its resources back on-premises, to a data center, is another question.

To discuss this question and its implications for the future of the cloud as a business solution, C2C recently convened a panel of representatives from three major cloud-hosted companies: Nick Tornow of Twitter, Keyur Govande of Etsy, and Rich Hoyer and Miles Ward of SADA. The conversation was charged from the beginning and only grew more lively throughout. Sensing the energy around the issue, Ward, who hosted the event, started things off with some grounding exercises. First, he asked each guest to define a relevant term. Tornow defined repatriation as “returning to your own data centers...or moving away from the public cloud more generally,” Govande defined TCO (total cost of ownership) as “the purchase price of an asset and the cost of operating it,” and Hoyer defined OPEX and CAPEX as, respectively, real-time day-to-day expenses and up-front long-term expenses. Ward then stirred things up by asking the guests to pose some reasons why an organization might want to repatriate. After these level-setting exercises, the guests dove into the business implications of repatriation.

The question of cost came up almost immediately, redirecting the discussion to the relationship between decisions around workloads and overall business goals:

Govande’s comments about “problems that are critical to your business” particularly resonated with the others on the call. Govande briefly elaborated on these comments via email after the event. “In the context of repatriation, especially for a product company, it is very important to think through the ramifications of doing the heavy infrastructural lift yourself,” he said. “In my opinion, for most product companies, the answer would be to ‘keep moving up the stack,’ i.e. to be laser focused on your own customers' needs and demands, by leveraging the public cloud infrastructure.”

These sentiments resurfaced later in the discussion, when the group took up the problem of weighing costs against potential opportunities for growth:

The more the group explored these emerging themes of workload, cost, and scale, the more the guests offered insights based on their firsthand experiences as executives at major tech companies.
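The terms the panelists defined can be made concrete with a toy comparison. The sketch below is purely illustrative; every figure in it is hypothetical rather than drawn from the panel:

```python
# Toy TCO comparison: on-prem (CAPEX-heavy) vs. cloud (OPEX-heavy).
# All figures are hypothetical, chosen only to illustrate the definitions.

def tco_on_prem(hardware_capex: float, yearly_opex: float, years: int) -> float:
    """TCO = up-front purchase price of the asset + the cost of operating it."""
    return hardware_capex + yearly_opex * years

def tco_cloud(monthly_spend: float, years: int) -> float:
    """Cloud TCO is almost entirely operating expense (OPEX)."""
    return monthly_spend * 12 * years

on_prem = tco_on_prem(hardware_capex=500_000, yearly_opex=120_000, years=5)
cloud = tco_cloud(monthly_spend=15_000, years=5)

print(f"5-year on-prem TCO: ${on_prem:,.0f}")  # $1,100,000
print(f"5-year cloud TCO:   ${cloud:,.0f}")    # $900,000
```

The panel's "complex calculus" lives in everything this sketch leaves out: staff time, opportunity cost, elasticity, and the security and managed-service capabilities discussed throughout the event.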
Tornow used an anecdote about launching the game FarmVille at Zynga to illustrate the unique challenges of launching products on the cloud:

During the audience Q&A, a question about TCO analysis gave Hoyer the chance to go long on his relevant experiences at SADA:

As the conversation began to wind down, Ward put the guests on the spot again, asking Tornow and Govande point-blank whether either of them would consider repatriation an option for their company that very day. Unsurprisingly, neither said they would:

By the time Ward handed the microphone back to Dale Rossi of Google Cloud, who introduced and concluded the event, the conversation had lasted well over an hour, leaving very few angles on the subject of repatriation unexamined. Many hosts might have felt satisfied letting an event come to an end at this point, but not Ward. To leave the guests and the audience with a sense of urgency and resolve, he treated everyone on the call to a rendition of “Reveille,” the traditional military call to arms, arranged exclusively for this group for solo tuba:

Repatriation may not be a realistic option for many if not most businesses, but discussing the possibility hypothetically illuminates the considerations these same businesses will have to confront as they approach cloud strategy and workload balance. “Nobody on our panel had heard of anyone born in the cloud ever going ‘back’ to the data center,” Ward said in an email reflecting on the event. “Any infrastructure cost analysis is a ‘complex calculus,’ and there's no easy button.” For Ward, there is one way to make this complex calculus manageable: “To get maximum value from cloud, focus in on the differentiated managed services that allow you to refocus staff time on innovation.”

When you hear the word “repatriation,” what comes to mind for you? What does it imply for your organization and the workloads your organization manages? Are there any considerations you find crucial that you want to talk through in more depth? Join the C2C Community and start the conversation!

Categories: Infrastructure, Google Cloud Strategy, Industry Solutions, Google Cloud Partners, Media, Entertainment, and Gaming, Retail, Session Recording

How to Choose a Virtual Machine (VM) From Google Cloud

Google Cloud provides virtual machines (VMs) to suit any workload, be it low cost, memory-intensive, or data-intensive, and any operating system, including multiple flavors of Linux and Windows Server. You can even couple two or more of these VMs for fast and consistent performance. VMs are also cost-efficient: pricier VMs come with 30% discounts for sustained use and discounts of over 60% for three-year commitments.Google’s VMs can be grouped into five categories. Scale-out workloads (T2D) If you’re managing or supporting a scale-out workload––for example, if you’re working with web servers, containerized microservices, media transcoding, or large scale java applications––you’ll find Google’s T2D ideal for your purposes. It’s cheaper and more powerful than general-purpose VMs from leading cloud vendors. It also comes with full Intel x86 CPU compatibility, so you don’t need to port your applications to a new processor architecture. T2D VMs have up to 60 CPUs, 4 GB of memory, and up to 32 Gbps networking.Couple T2D VMs with Google Kubernetes Engine (GKE) for optimized price performance for your containerized workloads. General purpose workloads (E2, N2, N2D, N1) Looking for a VM for general computing scenarios such as databases, development and testing environments, web applications, and mobile gaming? Google’s E2, N2, N2D, and N1 machines offer a balance of price and performance. Each supports up to 224 CPUs and 896 GB of memory. Differences between VM types E2 VMs specialize in small to medium databases, microservices, virtual desktops, and development environments. E2s are also the cheapest general-purpose VM. N2, N2D, and N1 VMs are better equipped for medium to large databases, media-streaming, and cache computing.  Limitations These VMs don’t come with discounts for sustained use. These VMs don’t support GPUs, local solid-state drives (SSDs), sole-tenant nodes, or nested virtualization. 
Ultra-high memory VMs (M2, M1) Memory-optimized VMs are ideal for memory-intensive workloads, offering more memory per core than other VM types, with up to 12 TB of memory. They’re ideal for applications that have higher memory demands, such as in-memory data analytics workloads or large in-memory databases such as SAP HANA. Both models are also perfect for in-memory databases and analytics, business warehousing (BW) workloads, genomics analysis, and SQL analysis services. Differences between VM types: M1 works best with medium in-memory databases, such as Microsoft SQL Server. M2 works best with large in-memory databases.  Limitations These memory-optimized VMs are only available in specific regions and zones on certain CPU processors. You can’t use regional persistent disks with memory-optimized machine types. Memory-optimized VMs don’t support graphic processing units (GPUs). M2 VMs don’t come with the same 60-91% discount of Google’s preemptible VMs (PVMs). (These PVMs last no longer than 24 hours, can be stopped abruptly, and may sometimes not be available at all.)  Compute-intensive workloads (C2) When you’re into high-performance computing (HPC) and want maximum scale and speed, such as for gaming, ad-serving, media transcribing, AI/ML workloads, or analyzing extensive Big Data, you’ll want Google Cloud’s flexible and scalable compute-optimized virtual machine (C2 VMs). This VM offers up to 3.8 GHz sustained all-core turbo clock speed, which is the highest consistent performance per core for real-time performance. Limitations You can’t use regional persistent disks with C2s. C2s have different disk limits than general-purpose and memory-optimized VMs. C2s are only available in select zones and regions on specific CPU processors. C2s don’t support GPUs.  Demanding applications and workloads (A2) The accelerator-optimized (A2) VMs are designed for your most demanding workloads, such as machine learning and high-performance computing. 
They’re the best option for workloads that require GPUs and are perfect for solving large problems in science, engineering, or business. A2 VMs range from 12 to 96 vCPUs, offering you up to 1360 GB of memory. Each A2 configuration comes with a fixed number of NVIDIA A100 GPUs attached. You can add up to 257 TB of local storage for applications that need higher storage performance. Limitations You can’t use regional persistent disks with A2 VMs. A2s are only available in certain regions and zones. A2s are only available on the Cascade Lake platform. So: which VM should I choose for my project? Any of the above VMs could be the right choice for you. To determine which would best suit your needs, take the following considerations into account: Your workload: What are your CPU, memory, porting, and networking needs? Can you be flexible, or do you need a VM that fits your architecture? For example, if you use Intel AVX-512 and need to run on CPUs that have this capability, are you limited to VMs that fit this hardware, or can you be more flexible? Price/performance: Is your workload memory-intensive, and do you need high-performance computing for maximum scale and speed? Does your business/project deal with data-intensive workloads? In each of these cases, you’ll have to pay more. Otherwise, go for the cheaper general-purpose VMs. Deployment planning: What are your quota and capacity requirements? Where are you located? (Remember: some VMs are unavailable in certain regions.) Do you work with VMs? Are you looking for a VM to support a current project? Which VMs do you use, or which would you consider? Drop us a line and let us know! Extra Credit:
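The decision checklist above can be sketched as a simple lookup. This is an illustrative sketch only: the workload categories and the mapping follow this article's groupings, not any official Google Cloud API, and real sizing decisions should be checked against current documentation.

```python
# Illustrative mapping of the workload categories in this article to
# Compute Engine machine families. The groupings mirror the article's
# descriptions; they are not an official Google Cloud API.

FAMILIES = {
    "scale-out": "T2D",          # web servers, containers, media transcoding
    "general-purpose": "E2",     # small/medium databases, dev/test, web apps
    "memory-intensive": "M2",    # large in-memory databases such as SAP HANA
    "compute-intensive": "C2",   # HPC, gaming, ad serving
    "accelerated": "A2",         # ML training and other GPU workloads
}

def suggest_family(workload: str, needs_gpu: bool = False) -> str:
    """Return a suggested machine family for a workload category."""
    if needs_gpu:
        # Among these families, only accelerator-optimized VMs support GPUs.
        return FAMILIES["accelerated"]
    return FAMILIES.get(workload, FAMILIES["general-purpose"])

print(suggest_family("memory-intensive"))       # M2
print(suggest_family("web-app", needs_gpu=True))  # A2
```

A real selection would also weigh region availability, quotas, and disk limits, per the deployment-planning questions above.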

Categories: Infrastructure, Compute, DevOps and SRE

Cloud Migration Cost Analysis: Determining the Value of Your Cloud Strategy

When organizations need to pivot to a different process or adopt different tools to enable more productivity, they tend to leap into that new system without first conducting up-front research to determine its feasibility. This method of adoption is possible, but it can cause many decision-makers to pivot again after a few months once unforeseen costs come to the forefront. Take cloud adoption and virtualization, for example. In the early 2000s, companies like Google, Amazon, and Salesforce introduced web-based services to manage digital workloads and make computing more efficient. Quickly, companies adopted multi-cloud or hybrid cloud solutions to manage their businesses and protect their employees’ and clients’ information. Now the workforce is going through another revolution. Working from home is more common, many aspects of our day-to-day lives are digital, and companies have a greater need for the level of security and compliance that only private cloud infrastructures can offer. Why, then, has there been such a shift in recent years toward cloud repatriation? Read on to find out more about measuring cloud computing costs and building a cloud computing infrastructure that enables your team to work more efficiently. Measuring Cloud Computing Costs Has Caused Many CIOs to Reconsider Their Cloud Solution Early adopters have the benefit of being at the forefront of the latest technology and innovation. However, being an early adopter comes with risks, and many CIOs and decision-makers who quickly merged their company’s processes and assets with the cloud are starting to measure their cloud computing costs and choosing to repatriate. When cloud computing is costly, misuse is often to blame. Used incorrectly, cloud computing can cost more than it should, but planning the provisioning process and configuring assets accurately can correct this.
Most cloud providers deliver reports and suggestions to help administrators reduce costs. Every major cloud provider offers calculators to estimate costs. Even after provisioning, watch your cloud usage and review configurations; most can be adjusted to lower budgets and scale resources back. What is TCO in Cloud Computing? One of the first steps of building a cloud computing infrastructure is calculating the foreseeable costs of the move. To do so, decision-makers can use total cost of ownership (TCO) as a helpful metric to compare the cost of their current infrastructure to the prospective costs of going hybrid or multi-cloud. But what is TCO in cloud computing? And is it a useful tool for weighing the cost-effectiveness of application modernization? Total cost of ownership refers to the total associated costs of an asset, including purchase price, adaptation, and operation. In cloud computing, specifically, TCO refers to all of the associated costs of purchasing and operating a cloud technology. Several factors make up TCO, including administration, capacity, consulting fees, infrastructure software, and integration. To properly calculate TCO, administrators must create a plan for migration and factor in the costs of maintaining the environment after the business comes to rely on cloud resources. Conducting a Cloud TCO Analysis & Determining ROI Another important metric in cloud migration cost analysis is ROI, or return on investment. Many stakeholders and decision-makers may be familiar with ROI as a business term, but less familiar with it in the context of cloud computing. TCO feeds directly into ROI: after the initial investment, the monthly cost savings should be greater than the monthly costs of running the environment. Over time, cost savings will exceed the initial investment if the company operates on a lower budget than it did using on-premise resources. An organization’s ROI is impacted by more than just the cost of infrastructure.
It’s also impacted by performance, availability, scalability, and the human resources necessary to maintain it. For example, the costs of running cloud resources every month could be cheaper than on-premise costs, but slow systems reduce productivity and could cost more in constant bug fixing and troubleshooting. Measuring the Risks of Cloud Repatriation After conducting a TCO analysis on your cloud solution, you may realize that there’s room for improvement, or savings, in your cloud strategy. But repatriation, or shifting from a public cloud model to an on-premise private server, comes with its own host of risks and potential migration costs that CIOs and company leaders will need to assess in determining when to shift and when to stay. Repatriation Costs Repatriation is the process of “reverse migration”: bringing data and applications back in-house. The costs of repatriation add strain to an IT budget, so migration back to on-premises infrastructure must be planned. Costs include the bandwidth required to migrate data and applications, the hardware necessary to support users and services, security tools, the personnel needed to support and maintain the resources, and any downtime costs. Administrators usually avoid repatriation unless it’s necessary, due to the costs, training, and downtime associated with migration. Security & Compliance Risks One of the most popular reasons for building a cloud computing infrastructure on public platforms is security assurance and compliance. However, this solution may not continue to be feasible for smaller organizations as the cost of cloud services continues to rise. If cloud resources are not configured properly, data breaches can occur. Small organizations with few security resources may find that the risks associated with migration, including compliance regulations surrounding cloud-hosted data, outweigh the associated savings.
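The TCO and ROI relationship described above can be made concrete with a back-of-the-envelope calculation. All figures here are hypothetical, invented purely for illustration:

```python
# Hypothetical TCO/ROI comparison. All dollar figures are made up.

def tco(purchase: float, migration: float, monthly_ops: float, months: int) -> float:
    """Total cost of ownership: one-time costs plus operating costs over a period."""
    return purchase + migration + monthly_ops * months

# Compare a 36-month horizon: on-prem (hardware purchase, higher run costs)
# versus cloud (one-time migration, lower run costs).
on_prem_tco = tco(purchase=250_000, migration=0, monthly_ops=12_000, months=36)
cloud_tco = tco(purchase=0, migration=40_000, monthly_ops=9_000, months=36)

savings = on_prem_tco - cloud_tco
investment = 40_000          # the one-time migration cost
roi = savings / investment   # return on investment over the same period

print(on_prem_tco, cloud_tco)   # 682000.0-style totals for each option
print(savings, roi)
```

As the article notes, a real analysis would also fold in the harder-to-price factors: performance, availability, and the staff needed to maintain the environment.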
Consider Your Previous Cloud Migration Strategy Your original cloud migration strategy will play a big role in determining the feasibility of repatriation. For instance, if your team migrated by replatforming, it may be too expensive or time-consuming to move back on-prem. Conversely, if your organization took a more “lift-and-shift” approach, there may be an opportunity for you to shift back, if doing so won’t compromise security and compliance. It’s not uncommon for organizations to try cloud migration with limited sample data and applications, and then later move more critical applications. The previous plan and migration process should be analyzed, and lessons learned should be carried into the next migration, which should be smoother, with less downtime. Starting with a test migration can also make a larger overhaul of your cloud footprint less expensive. Do any of these concerns resonate with you? Are you thinking about moving your workloads off the cloud? Come to our Deep Dive on cloud repatriation on January 20, 2022:
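The repatriation costs discussed above also lend themselves to a quick break-even estimate: one-time migration costs are recovered only if on-prem operating costs really are lower per month. The figures below are hypothetical:

```python
# Hypothetical break-even estimate for repatriation ("reverse migration").
# One-time costs (bandwidth, hardware, security tools, personnel, downtime)
# pay for themselves only through lower monthly run costs. Figures invented.
import math

def breakeven_months(one_time_cost: float, cloud_monthly: float,
                     onprem_monthly: float):
    """Months until one-time repatriation costs are repaid by lower run costs."""
    monthly_savings = cloud_monthly - onprem_monthly
    if monthly_savings <= 0:
        return None  # repatriation never pays for itself
    return math.ceil(one_time_cost / monthly_savings)

print(breakeven_months(120_000, cloud_monthly=15_000, onprem_monthly=10_000))  # 24
print(breakeven_months(120_000, cloud_monthly=9_000, onprem_monthly=10_000))   # None
```

A two-year payback, as in the first case, may or may not be acceptable depending on hardware refresh cycles and how long the workload is expected to live.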

Categories: Infrastructure, Google Cloud Strategy

C2C Deep Dive Series: Containerization Strategies for Modernizing with Kubernetes

Organizations with all kinds of storage and hosting needs are adopting cloud infrastructure as a preferred solution. For these organizations, this means lower costs, faster speeds, and enhanced performance in general. What does this mean for the teams managing this infrastructure? In many cases, it means adapting to new strategies and a new environment. Among the most popular of these strategies right now is containerization, and among the most popular of these environments is Kubernetes. Mattias Gees is a solutions architect at Jetstack, a cloud-native services provider building enterprise platforms using Kubernetes and OpenShift. With Kubernetes and containerization gathering momentum as a topic of interest among our community members and contributors in recent months, we wanted to invite Gees to share what he has learned as a containerization specialist using Kubernetes on a daily basis. Gees and other representatives of Jetstack, a C2C Platinum partner, were excited to join us for a Deep Dive on the topic to share some of these strategies directly with the C2C community. Gees started his presentation with some background on Jetstack, and then offered a detailed primer on Kubernetes and its capabilities for containerizing on the cloud. This introduction provided context for Gees to introduce and explain a series of containerization strategies, starting with load balancing on Google Kubernetes Engine. Another strategic solution Gees pointed out was one that has also been a frequent topic of discussion within our community: Continuous Delivery and Continuous Deployment (CD). Kubernetes is a complex and dynamic environment, and different cloud engineers and architects will use it in different ways.
To give a sampling of the different potential strategies Kubernetes makes available, Gees listed some advanced Kubernetes features, including health checks, storage of config and secrets in Kubernetes objects, basic autoscaling, and advanced placement of containers: https://vimeo.com/645382667 The most impressive segment of Gees’ presentation was his overview of the Kubernetes platform, including a screenshot of his own cloud-native landscape. Gees concluded the presentation with a breakdown of the different team roles associated with Kubernetes modernization, stressing that implementing the many containerization strategies he identified is not the work of one person, but of many working in concert toward common goals. Are you an architect working with Kubernetes in a cloud-native environment? Do you prefer any of these containerization strategies? Can you think of any you’d like to add? Reach out to us in the community and let us know! Extra Credit:
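The basic autoscaling Gees mentions follows a simple documented rule: Kubernetes' Horizontal Pod Autoscaler computes desired replicas as ceil(currentReplicas × currentMetric ÷ targetMetric), bounded by the min/max set in the HPA spec. Here is a minimal Python sketch of that rule (the parameter names are ours, not the Kubernetes API's):

```python
# Sketch of the scaling rule the Kubernetes Horizontal Pod Autoscaler applies:
#   desired = ceil(current * currentMetric / targetMetric)
# clamped to the minReplicas/maxReplicas bounds from the HPA spec.
import math

def desired_replicas(current: int, current_metric: float, target_metric: float,
                     min_replicas: int = 1, max_replicas: int = 10) -> int:
    desired = math.ceil(current * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# 4 pods averaging 90% CPU against a 60% target -> scale out to 6 pods.
print(desired_replicas(4, current_metric=90, target_metric=60))  # 6
# Load drops to 30% average -> scale in to 2 pods.
print(desired_replicas(4, current_metric=30, target_metric=60))  # 2
```

The real controller adds tolerances and stabilization windows around this formula, but the core arithmetic is exactly this ratio.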

Categories: Infrastructure, Containers and Kubernetes, Session Recording

C2C Navigators Series: NextGen Healthcare’s SAP on Google Cloud Migration

With migration to the cloud continuing across the public and private sectors at an accelerating rate, stories of successful migration projects are becoming especially timely and valuable. Organizations considering migration want to hear from organizations that have executed the process successfully. As these stories emerge with increasing frequency, sharing them within and among communities like C2C becomes not only natural but necessary. As we initially reported this October, NextGen Healthcare recently partnered with Managecore to simultaneously migrate their SAP applications from a private to a public cloud infrastructure and upgrade to the SAP HANA database. This was an ambitious migration project, and given the regulations around NextGen’s personally identifiable data, failure was not an option. Despite these unique considerations, the team completed the project in under six months. On October 28, 2021, C2C’s David Wascom connected with Karen Bollinger of NextGen Healthcare and Frank Powell of Managecore for a virtual C2C Navigator event exploring the background and the details of this successful project. The conversation began the way a migration process itself begins: the team established customer goals. When Wascom asked what customers typically want from a migration, Powell offered three main goals common to organizations considering migration: greater stability, lower fees and personnel costs, and “time to innovate and do new things for their organization.” After wrapping up this high-level overview, Wascom asked Bollinger and Powell for a more detailed description of the migration process. Bollinger outlined the main phases of the migration period, from moving the infrastructure from cloud to cloud, to updating the landscape to the latest service pack, to moving everything into the HANA database.
Powell stressed the importance of the preliminary phase of the migration, including testing and defining SAP strategy. The discussion became most lively when Wascom asked Powell and Bollinger about their data security strategy. As a healthcare provider, NextGen is beholden to HIPAA and attendant ethical and legal considerations concerning data security. “Security is on everyone’s mind, even on-prem,” said Powell. Bollinger was equally unequivocal, if not more so. “I have no choice,” she said. “I’m in healthcare.” What does it take to migrate a massive quantity of sensitive data successfully and securely? According to Bollinger, it takes a trusted partner. “What I was looking for was a partner,” she said. “A third-party partner that we could have these conversations with.” The sentiment resonated with Wascom, who added, “The fact that you were able to work towards a common goal is a hugely powerful story.” Powell agreed wholeheartedly. For him, partnership is not just a goal, it’s a requirement. “As a service provider, our goals have to align with our customers,” he said. “If they don’t, then right from the get-go, we have failed.” When Wascom asked Bollinger and Powell for final reflections and advice for other executives considering migrating their own organizations, both responded positively and succinctly. The biggest takeaway for Bollinger? “It can be done.” Powell was similarly encouraging. “Talk to someone who’s been successful at it,” he said. “Use those as your reference points.” The reason for this, in his words, was just as simple: “We’re dealing with some pretty amazing technology.” C2C brings people like Bollinger and Powell together to demonstrate the potential of cloud technology for organizations seeking solutions and success. How is your organization hosting its software and data? Have you considered a migration to the cloud, or to a different cloud infrastructure?
Would you like to hear from other organizations where similar projects have been successful? Reach out and let us know what you’re thinking, and we’ll incorporate your thoughts as we plan future discussions and events. Extra Credit:  

Categories: Infrastructure, Industry Solutions, Cloud Migration, SAP, Healthcare and Life Sciences, Session Recording

Get to Know the Google Cloud Architect Certification

Personal development and professional development are among the hottest topics within our community. At C2C, we’re passionate about helping Google Cloud users grow in their careers. This article is part of a larger collection of Google Cloud certification path resources. The Google Cloud Professional Cloud Architect is a key player on any team that wants to activate the full benefits of Google Cloud within its organization. According to Google, “this individual designs, develops, and manages robust, secure, scalable, highly available, and dynamic solutions to drive business objectives.” Candidates need proficient knowledge of cloud strategy, solution design, and architecture best practices before taking this exam. The Cloud Architect certification debuted in 2017 and quickly became one of the strongest competitive advantages a cloud job-seeker can hold; for three years in a row, Global Knowledge has placed the Google Professional Cloud Architect at or near the top of its 15 top-paying IT certifications. The salary from holding this certification doesn’t exist in a bubble, however. Global Knowledge’s report includes additional analysis of its respondents, including the average number of additional certifications, the average age of the certification holder, and popular cross-certifications (some of which also place high on the list). That said, we already know from the Associate Cloud Engineer overview that any Google Cloud certification is a substantial value boost in the job market. Now, for anyone who wants to break into that market, let’s get the basics out of the way. These certifications are well compensated for a reason, so make some time to prepare and answer the following questions before sitting for this challenging exam: What experience should I have before taking this exam? What roles and job titles does Google Cloud Professional Cloud Architect certification best prepare me for? Which topics do I need to brush up on before taking the exam?
Where can I find resources and study guides for Google Cloud Professional Cloud Architect certification? Where can I connect with fellow community members to get my questions answered? View image as a full-scale PDF here. Looking for information about a different Google Cloud certification? Check out the directory in the Google Cloud Certifications Overview. Extra Credit: Google Cloud’s certification page: Professional Cloud Architect Example questions Exam guide Coursera: Preparing for Google Cloud Certification: Cloud Architect Professional Certification Pluralsight: Google Cloud Certified Professional Cloud Architect AwesomeGCP Professional Cloud Architect Playlist Global Knowledge IT Skills and Salary Report 2020 Global Knowledge 2021 Top-Paying IT Certifications Have more questions? We’re sure you do! Career growth is a hot topic within our community, and we have quite a few members who meet regularly in our C2C Connect: Certifications chat. Sign up below to stay in the loop.

Categories: Infrastructure, Careers in Cloud, Google Cloud Certifications, Infographic

Multi-Cloud Vs. Hybrid Cloud: When Should Businesses Make the Switch to a Hybrid Strategy?

In 2019, the public cloud services market reached $233.4 billion in revenue, a 26% year-over-year increase: a strong indication that app modernization and cloud migration continue to be winning strategies for many enterprises. But which cloud strategy should a decision-maker choose? When should they migrate their legacy applications into a hybrid, multi-cloud, or on-premise architecture? There may not be single definitive answers to these questions, but there are certainly different options to weigh and considerations to make before officially adopting a new process. Read on to find out more about multi-cloud vs. hybrid cloud strategies for startups, and join the conversation with other cloud computing experts in the C2C Community. What is a Hybrid Cloud Strategy? A hybrid cloud strategy is an internal organization method for businesses and enterprises that integrates public and private cloud services with on-premise cloud infrastructure to create a single, distributed computing environment. The cloud provides businesses with resources that would otherwise be too expensive to deploy and maintain in house. With on-premise infrastructure, the organization must have the real estate to house equipment, install it, and then hire staff to maintain it. As equipment ages, it must be replaced. This whole process can be extremely expensive, but the cloud gives administrators the ability to deploy the same resources at a fraction of the cost. Deploying cloud resources takes minutes, as opposed to the potential months required to build out new technology in house. In a hybrid cloud, administrators deploy infrastructure that works as an extension of their on-premise infrastructure, so it can be implemented in a way that ties into current authentication and authorization tools. What is a Multi-Cloud Strategy?
Conversely, a multi-cloud strategy is a cloud management strategy in which enterprises treat their cloud services as separate entities. A multi-cloud strategy includes more than one public cloud service and does not need to include private services, as a hybrid cloud does. Organizations use a multi-cloud strategy for several reasons, but the primary ones are to provide failover and avoid vendor lock-in. Should one cloud service fail, a secondary failover service can take over until the original service is remediated. It’s an expensive solution, but it’s a strategy to reduce downtime during a catastrophic event. Most cloud providers have similar products, but administrators have preferences and might like one over another. By using multiple cloud services, an organization isn’t tied to only one product. Administrators can pick and choose from multiple services and implement those that work best for their organizations’ business needs. What is the Difference Between a Hybrid and Multi-Cloud Strategy? Though the differences might be slight, choosing the wrong cloud strategy can impact businesses in a big way, especially those just starting out. One of the primary differences between a hybrid and a multi-cloud strategy is that a hybrid cloud is managed as one singular entity while a multi-cloud infrastructure is not. This is largely because multi-cloud strategies often include more than one public service, each performing its own function. Additionally, when comparing multi-cloud vs. hybrid cloud, it’s important to note that a hybrid cloud will always include a private cloud infrastructure. A multi-cloud strategy can also include a private cloud service, but if the computing system is not managed as a single entity, it is technically considered both a multi-cloud and a hybrid cloud strategy. The infrastructure is designed differently in each case, but the biggest difference is cost.
Hosting multi-cloud services costs more than using one service in a hybrid solution. Supporting a multi-cloud environment also requires more resources, because it’s difficult to get services from separate providers to integrate smoothly with each other, and it requires additional training for any staff unfamiliar with cloud infrastructure. Which Cloud Strategy Has the Most Business Benefits? Every cloud strategy has its benefits, and most organizations leverage at least one provider to implement technology that would otherwise be too costly to host in-house. For a simple hybrid solution, use a cloud service that provides a majority of the resources needed. All cloud services scale, but you should find one that has the technology you need to incorporate into your workflows. Multi-cloud is more difficult to manage, but it gives administrators greater freedom to pick and choose their favorite resources without relying on only one provider. A multi-cloud strategy also provides failover should a single provider fail, so it eliminates the single point of failure that most hybrid solutions experience. A cloud provider has minimal downtime, but downtime occasionally happens. With a multi-cloud strategy, administrators can keep most business workflows working normally until the primary provider recovers. It’s hard to stand squarely on the side of one cloud strategy over another. Every business has its own unique variables and dependencies that may make a hybrid model more desirable than multi-cloud, or vice versa. The benefits of an on-premise cloud infrastructure may also outweigh those of both hybrid and multi-cloud. The decision to go hybrid or adopt a multi-cloud strategy rests with each enterprise’s decision-makers. There are, however, some considerations businesses of any size and lifecycle can take into account before finalizing the decision.
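The failover behavior described above reduces to a simple routing rule: send traffic to the first healthy provider in your preference order. A toy sketch (the provider names and the health map are hypothetical):

```python
# Toy sketch of multi-cloud failover: route to the first healthy provider
# in preference order. Provider names and health states are hypothetical.

def pick_provider(providers, is_healthy):
    """Return the first healthy provider, preserving preference order."""
    for name in providers:
        if is_healthy(name):
            return name
    raise RuntimeError("no healthy provider available")

# Simulated outage: the primary is down, so traffic shifts to the secondary.
health = {"primary-cloud": False, "secondary-cloud": True}
print(pick_provider(["primary-cloud", "secondary-cloud"], lambda p: health[p]))
# -> secondary-cloud
```

Real failover systems layer health checks, DNS or load-balancer changes, and data replication on top of this, which is where the extra cost the article mentions comes from.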
What to Consider When Switching to a Hybrid Cloud Strategy Before choosing a provider, you should research each provider’s services, feedback, and cost. It’s not easy to choose a provider, but the one integrated into the environment should have all the tools necessary to enhance workflows and add technology to the environment. A few key items that should be included are: Authorization and authentication tools Speed and performance metrics Backups and failover within data centers Different data center zones for internal failover Logging and monitoring capabilities Usage reports Convenient provisioning and configuration Most cloud providers have a way to demo their services, or they give users a trial period to test products. Use this trial wisely so that administrators can determine the best solution for the corporate environment. Multi-Cloud Vs. Hybrid Cloud for Startups Again, deciding between a multi-cloud strategy vs. a hybrid cloud strategy depends on the needs of the company. For startups, there may need to be a greater emphasis on security and disaster recovery, in which case a multi-cloud management strategy would provide a company at the beginning of its lifecycle the protection it needs to grow. Conversely, to bring up one of the key differences between a hybrid cloud and multi-cloud strategy, if an entity uses private cloud services, a hybrid cloud model would provide the startup with the flexibility it needs to make changes to its computing infrastructure as it becomes more established. Do Startups Benefit From an On-Premise Cloud Infrastructure? The short answer is yes, startups can benefit from an on-premise cloud infrastructure. Taking any services in-house, whether it’s managing payroll or IT services, can help reduce costs and give businesses more visibility into their workflow.
If there is a need to hold on to an on-premise cloud infrastructure, a multi-cloud strategy will allow that enterprise to maintain that computing system while also managing additional public cloud services separately. What Does the Resurgence of IT Hardware Mean for Cloud? Even though cloud adoption has been surging for some time among businesses (Gartner reported in 2019 that more than a third of organizations view cloud investments as a “top 3 investing priority”), IT hardware and in-house services have also experienced a resurgence in popularity. Many believe this new phenomenon, referred to as cloud repatriation by those in the IaaS (Infrastructure as a Service) industry, is the result of a lack of understanding around proper cloud management and containerization among IT decision-makers. They may initially choose to migrate certain applications into a hybrid cloud strategy only to abandon the effort because of workload portability issues. In light of this shift, hybrid and multi-cloud strategies still reign supreme as cost-effective and secure ways to manage legacy applications and workloads. It may take a fair amount of planning and strategizing to decide which cloud strategy matches the company lifecycle to which it applies, but cloud adoption certainly isn’t going anywhere any time soon.

Categories: Infrastructure, Google Cloud Strategy, Hybrid and Multicloud, Cloud Migration

C2C Talks: Google Cloud Security Demystified Key Conversations

Cloud security is an emerging field, and even some of the most seasoned professionals in the cloud community are still learning how it works, or at least thinking about it. If all of your data is stored on the cloud, and all of your apps are running on it, you want to know that those apps and that data are secure, and knowing that the cloud is an open, shared environment might not be an immediate comfort. Luckily, the cloud offers all kinds of security resources you can’t access anywhere else. Understanding how these resources can protect your data and assets is crucial to doing the best work possible in a cloud environment. Vijeta Pai is a C2C contributor and Google Cloud expert whose website Cloud Demystified provides comics and other educational content that makes cloud security accessible and intelligible to the average Google Cloud user. C2C recently invited Pai to give a presentation and host a discussion on all things cloud security, from threat modeling to shared responsibility arrangements to best practices, drawing on her work with Cloud Demystified as well as the content she’s published on the C2C blog. Watch her full presentation below, and read on for some of the key conversations from her C2C Talks: Cloud Security Demystified. After providing some background on types of cloud environments (public, private, and hybrid) and the different elements of cloud security (technologies, processes, controls, and policies), Pai broke down the STRIDE threat model. This model defines every type of cybersecurity attack a cloud security system might be required to prevent. The six types are Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege. Watch below for Pai’s breakdown of the definitions and associated security considerations of each one. Next, Pai explained the different possible models used to share the responsibility for security between an organization and a cloud provider.
The three models are Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS), and each allocates responsibility for people, data, applications, and the operating system (OS) differently. Pai kicked off the open discussion portion with a comprehensive review of cloud security best practices, which referred back to a post she wrote for the C2C blog, 10 Best Practices for Cloud Security in Application Development. As she does in the post, Pai went through these strategies one by one, from Identity and Access Management Control to Data Encryption to Firewalls. For anyone in the process of actively implementing their cloud security measures, Pai’s full answer is worth the watch. A unique opportunity for C2C members is the ability to ask questions directly of the experts, and Pai fielded several questions about specific aspects of the technology of Google Cloud itself. The first question came from C2C member Dickson Victor (@Vick), who wanted to know whether the cloud can support better security than an on-premise system. Pai’s answer spoke to the heart of the issue for most prospective cloud users: the policies, processes, and resources available in an open environment like the cloud versus those available in a locked, private system. Her response was nothing but encouraging. Pai also took a moment to let C2C community member Lokesh Lakhwani (@llakhwani17) plug the Google Cloud Security Summit, Google Cloud’s first-ever summit dedicated to cloud security. The discussion wrapped up with a question about cybersecurity insurance and whether it might become an entire industry once cloud security becomes a new standard. Pai wasn’t sure how quickly the industry would explode.
Still, she thinks there is room out there for growth and innovation, precisely because of the extent to which technology has become a necessary part of day-to-day life for so many people living through the COVID-19 pandemic, including Pai’s mother, who lives and works in India. Moreover, the more we live our lives on the cloud, the more we will need cloud security, which, to Pai, means there are plenty of opportunities right now for cybersecurity insurance companies to make their mark. Do you have questions or concerns about cloud security that Pai didn’t answer in this session? Feel free to share them in the comments and to connect with Pai directly. You can find her on LinkedIn or join C2C to keep up with her work and get in touch with other tech professionals working in the cloud security field.
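As a compact reference for the STRIDE model Pai presented, each of the six threat categories pairs with the security property it violates. These pairings are the standard STRIDE definitions, shown here as a small lookup table:

```python
# The six STRIDE threat categories and the security property each violates.
# These are the standard STRIDE pairings.
STRIDE = {
    "Spoofing": "Authentication",
    "Tampering": "Integrity",
    "Repudiation": "Non-repudiation",
    "Information Disclosure": "Confidentiality",
    "Denial of Service": "Availability",
    "Elevation of Privilege": "Authorization",
}

for threat, prop in STRIDE.items():
    print(f"{threat} -> violates {prop}")
```

Walking a system's components against this table is the core of the threat-modeling exercise Pai described.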

Categories: Infrastructure, Google Cloud Strategy, Identity and Security, Session Recording

Scaling Enterprise Software: Autoscaling Applications and Databases (full video)

Michael Pytel (@mpytel), co-founder and CTO at Fulfilld, shares stories from the team’s wins and losses in building out this intelligent managed warehouse solution.

The recording from this Deep Dive includes:
(2:20) Introduction to Fulfilld
(4:10) The team’s buildout requirements for a cloud-based application, including language support, responsiveness, and data availability
(9:15) Fulfilld’s Android-based scanner’s capabilities and hardware
(12:25) Creating the digital twin with anchor points
(14:50) Microservice architecture, service consumption, and service data store
(19:35) Data store options using BigQuery, Firestore, and CloudSQL
(23:35) Service runtime and runtime options using Cloud Functions
(28:55) Example architecture
(30:25) Challenges in deciding between Google Cloud product options
(31:40) Road map for the warehouse digital assistant, document scanning, and 3D bin packing algorithm
(39:00) Open community questions

Community Questions Answered
What does the road map include for security?
Did using Cloud Functions help with the system design and partitioning coding tasks by clearly defining functions and requirements?
Do you give your customers access to their allocated BigQuery instance?
What type of data goes to Firestore versus CloudSQL?

Other Resources
Google Cloud Platform Architecture Framework
Google Cloud Hands-On Labs on Coursera
Google Cloud Release Notes by Product

Find the rest of the series from Fulfilld below:

Categories: Infrastructure, Google Cloud Strategy, Industry Solutions, Databases, Supply Chain and Logistics, Session Recording

The Difference Between Virtual Machines (VMs) and Hypervisors

Let’s go wild here. Say you’re uncertain whether to keep your brain as is. You think a certain Harry Potter-type surgeon could create a better version, but you’re not sure. You’re also afraid this surgeon will botch your brain while he fiddles with improvements. So you have the surgeon construct cabinets in your skull, on the periphery of your brain, where he does all his work. If his virtual brains are better than the brain you have now, the surgeon replaces your brain with his creations. If they’re not, the surgeon continues producing virtual brains in his cabinets until you’re satisfied with his results.

Those cabinets are called virtual machines. The layer that oversees and organizes these cabinets, while giving the surgeon more room to work in, is called the hypervisor.

Virtual Machines
In the computer world, we have the hardware, which is the equivalent of your body, and the software, the equivalent of your brain, that drives the body. Now, say you want to improve some existing software but are afraid that tinkering with it could irreversibly destroy the original system. Computer engineers solved that problem by building one or more virtual machines, or virtual cabinets (like mini labs), where they tinker with their prototypes, called instances, while the original stays intact.

Hypervisor
At one time, this software tool was called the “supervisor.” It’s the additional digital layer that connects each of your virtual machines (VMs), supervises the work being done in the VMs, and separates each VM from the others. In this way, your instances are organized, and your VMs are sealed tight against outside interference, protecting your instances, or innovations. You’ve got two types of hypervisors: those that run directly on the hardware and those that run on top of an operating system.
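To make the cabinet metaphor concrete, here is a toy sketch in plain Python of a “hypervisor” creating and isolating “VM” instances. No actual virtualization happens here; the class and attribute names are invented purely for illustration:

```python
# Toy model of a hypervisor managing isolated VM instances.
# Purely illustrative: no real virtualization is involved.

class VirtualMachine:
    """A 'cabinet': an isolated copy of some software to tinker on."""
    def __init__(self, name, software):
        self.name = name
        self.state = dict(software)  # a copy, so the original stays intact

class Hypervisor:
    """The layer that creates, tracks, and separates the VMs."""
    def __init__(self, original_software):
        self.original = original_software
        self.vms = {}

    def create_vm(self, name):
        vm = VirtualMachine(name, self.original)
        self.vms[name] = vm
        return vm

original = {"version": 1}
hv = Hypervisor(original)
vm = hv.create_vm("prototype-1")
vm.state["version"] = 2          # experiment inside the cabinet...
print(original["version"])       # ...while the original stays at 1
```

The design point mirrors the metaphor: each VM works on its own copy of state, and the hypervisor is the only layer that knows about all of them.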
In either case, the hypervisor serves as an “alley” for storing additional work information.

Amazon’s Nitro Hypervisor
Nine years ago, Amazon Web Services (AWS) noticed that software developers would soon have a problem. Hypervisors were wasteful: they consumed too much RAM, they yielded inconsistent results, and their security would be increasingly challenged as attacks on software accelerated.

“What we decided to do,” Anthony Liguori, principal software engineer at Amazon and one of the key people who planned and executed the venture, told me, “was to completely rethink and reimagine the way things were traditionally done.”

VMs and hypervisors are software, and so are all the input/output (I/O) functionalities integral to these systems. What AWS did was tweeze out each of these I/O functions bit by bit and integrate them into its dedicated Nitro hardware architecture, using custom silicon produced by Israeli startup Annapurna Labs. Today, all AWS virtualization happens in hardware instead of software, shaving management software costs and reducing jitter to microseconds.

Since 2017, more companies have emulated AWS and likewise migrated most of their virtualization functionality to dedicated hardware, in some cases rendering the software hypervisor unnecessary. This means all virtualization can now be done from the hardware tech stack without need of a hypervisor.

Bottom Line
Virtual machines are for deploying virtualization models, where you can build and rebuild instances at your pleasure while protecting the original OS. The hypervisor operates and organizes these VMs and stores additional work information. In the last few years, AWS developed its Nitro virtualization system, in which software VMs and hypervisors were transmogrified into dedicated hardware form. In this way, working on instances becomes cheaper, faster, and more secure. Innovations also unfurl faster, since the software VM and hypervisor layers are offloaded to hardware.
More vendors, like VMware, Microsoft, and Citrix, have emulated Amazon and introduced their own so-called bare metal hypervisors, too. These hypervisors are called Type 1.

Meanwhile, Google Cloud uses the security-hardened Kernel-based Virtual Machine (KVM) hypervisor. KVM is an open source virtualization technology built into Linux, essentially turning Linux into a system with both a hypervisor and virtual machines (VMs). Although it’s Type 2 (since it runs within an OS), it has all the capabilities of a Type 1.

Let’s Connect!
Leah Zitter, PhD, has a Masters in Philosophy, Epistemology and Logic and a PhD in Research Psychology.
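On a Linux machine you can check for yourself whether KVM is usable: the kernel exposes it as the device node /dev/kvm, and the CPU advertises hardware virtualization via the vmx (Intel) or svm (AMD) flags. A quick, hedged sketch follows; the paths are Linux-specific, and the check simply returns False on other systems:

```python
# Quick check for KVM availability on Linux.
# Returns False on non-Linux systems or where the paths don't exist.
import os

def kvm_available():
    """True if /dev/kvm exists and the CPU reports vmx (Intel) or svm (AMD)."""
    if not os.path.exists("/dev/kvm"):
        return False
    try:
        with open("/proc/cpuinfo") as f:
            flags = f.read()
    except OSError:
        return False
    return "vmx" in flags or "svm" in flags

print("KVM available:", kvm_available())
```

If this prints False on a Linux host, the usual culprits are virtualization being disabled in the BIOS/UEFI or the kvm kernel modules not being loaded.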

Categories: Infrastructure, Compute

What Is Google Anthos and How Is It Modernizing Cloud Infrastructure?

Many of the applications we interact with every day are supported by hybrid and multi-cloud infrastructure. And while this modernization creates many benefits in security, scalability, and continuous testing processes, effectively managing these sometimes disparate environments can be complicated. Enter Anthos, Google’s multi-services platform launching CI and application operations into the future of cloud computing.

Continuous integration (CI) is integral to DevOps and modern hybrid cloud infrastructure. It allows developers and operations teams to orchestrate software packaged and deployed in containers. Instead of manually configuring and deploying software, Anthos empowers DevOps with tools that automate deployments, speed up delivery of applications, and give DevOps access to cloud-native services that save time and money.

What Is Anthos?
Anthos is a managed application platform first introduced in 2019, when Google combined some of its cloud services with Google Kubernetes Engine (GKE) to create a system for unifying and modernizing application operations. Kubernetes is a Google-originated product used to orchestrate application deployment to the cloud. Software deployed with Kubernetes is packaged into a container along with its configurations and sent to the Google Cloud Platform (GCP). Using Kubernetes, DevOps can eliminate human errors and automate configurations during deployment. Automation is one of the most significant advantages of using Anthos.

DevOps uses Anthos to automate deployments to cloud-native environments in containers, the major component in microservices technology. Microservices break down large monolithic codebases into smaller components so that they can be individually managed and updated. The advantage of deploying to the cloud using Anthos is the speed of deployments, but GCP also offers performance improvements over on-premises infrastructure.
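To give a sense of what “packaged into a container along with its configurations” looks like in practice, here is a minimal Kubernetes Deployment manifest built as a Python dictionary. The project, image name, and labels are made up for illustration; real GKE or Anthos deployments would apply such a manifest via kubectl or a CI pipeline:

```python
# Minimal Kubernetes Deployment manifest as a Python dict.
# Image name and labels are hypothetical; apply via kubectl in practice.
import json

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "example-app"},
    "spec": {
        "replicas": 3,  # Kubernetes keeps three identical pods running
        "selector": {"matchLabels": {"app": "example-app"}},
        "template": {
            "metadata": {"labels": {"app": "example-app"}},
            "spec": {
                "containers": [{
                    "name": "example-app",
                    # hypothetical container image in a GCP registry:
                    "image": "gcr.io/example-project/example-app:1.0",
                    "ports": [{"containerPort": 8080}],
                }]
            },
        },
    },
}

print(json.dumps(deployment, indent=2))
```

The declarative shape is the point: the operator states the desired end state (three replicas of this image), and Kubernetes continuously reconciles the cluster toward it, which is what removes the manual configuration steps described above.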
Applications run on edge servers across Google data centers, so users across the globe will see a significant performance improvement regardless of their location.

Anthos Components & Strategies for App Modernization
While Kubernetes workload management is still very much at the heart of the platform, many other cloud technologies come together to create Anthos’ different components and build a tech ecosystem that is revolutionizing CI/CD:

GKE
One of Anthos’ main components is managing Kubernetes workloads. Kubernetes is an enterprise solution for container deployments. In large enterprise environments, constant monitoring, deployments, and recreation of containers require excessive resources. GKE manages these resources and ensures that the environment runs smoothly and without performance degradation.

GKE On-Prem
Organizations often work with hybrid solutions in large environments, meaning services run on-premises and in the cloud. GKE can be installed on-premises so that services can be deployed internally and in the cloud. You must have the infrastructure in place to run GKE on-premises, but GKE can manage both on-premises containers and those that run in the cloud.

Istio Service Mesh
In addition to supporting Kubernetes architecture, Anthos also gives developers and operators greater connectivity through a federated network. When organizations leverage microservices using containers, they decouple the large codebase into smaller units. These smaller units must “speak” to each other, and the Istio Service Mesh provides the pathways through which these microservices communicate.

Stackdriver
One of the biggest benefits of app modernization is full-stack observability and greater system health management. With Anthos, logging, tracing, and system monitoring are centralized within the platform, creating an opportunity for continuous deployment testing.
GCP Marketplace
Should an organization find a pre-made package in the Google Cloud Marketplace that will help with productivity, administrators can easily install packages from the marketplace to their cloud. These applications are configured to run on GCP, so no configuration of virtual machines, storage, or network resources is necessary.

GCP Cloud Interconnect
Another Anthos component that actively assists in app modernization is GCP Cloud Interconnect. In a hybrid environment, data from the on-premises network must sync with the cloud. Organizations must also upload data to storage devices in the cloud. GCP Cloud Interconnect provides a high-performance virtual private cloud network to transfer data between environments securely.

How Can Anthos Multi-Cloud Help Modernize Application Operations?
Such a large component of modern app development takes place in hybrid and public cloud environments. This calls for a need to streamline operational processes and introduce app modernization throughout the development cycle, from deployment to testing and system monitoring. Here are just some of the ways Anthos is revolutionizing cloud-native ecosystems:

Robust Observability & Shift-Left Testing
Anthos multi-cloud builds a more innovative coding environment through earlier, quicker testing and greater observability. Placing testing earlier in the development process is known as shift-left testing, and it allows developers and operators to improve the quality of their code deployments. Part of app modernization is shortening the distance between different development steps. With Anthos, system logs and traces are centralized with the Stackdriver technology to place testing power in the hands of developers and operators, not just insights teams.
Anthos Enables Greater Flexibility
Instead of working with a monolithic codebase with rigid platform requirements, Google Anthos and microservice technology provide greater flexibility to deploy containers across a hybrid environment. Deploy to the cloud, to on-premises infrastructure, and even to a developer device without extensive configuration management and time-consuming bug fixes.

Building Operational Consistency
Automation keeps deployments consistent and reduces human error from manual code promotions to production. Because configurations and code are packaged within a container and maintained in Kubernetes, every deployment remains the same and keeps code consistent across the cloud and on-premises infrastructure.

What Is the Future of Anthos Multi-Cloud & Hybrid CI/CD Environments?
Automation in DevOps continually proves to be the future of faster code deployments, reduced human error, and better consistency and performance in application execution. Continuous integration and delivery (CI/CD) can speed up deployments from several weeks to a few minutes in high-performance DevOps. As organizations realize that cloud microservices offer better performance and faster code deployments, Anthos will evolve into a beneficial service for any enterprise with a hybrid or multi-cloud environment. Anthos allows DevOps teams to fully automate their deployments, giving your developers more time to innovate and operations people more time to maintain and upgrade infrastructure. It saves time and money across all business units. It gives your organization the ability to maintain flexibility regardless of the software deployed and the platform required to run applications.
Extra Credit
https://cloud.google.com/anthos
https://www.cuelogic.com/blog/what-is-anthos-the-multicloud-platform-by-google
https://www.techzine.eu/blogs/cloud/48197/what-is-google-anthos-is-this-the-modern-cloud-infrastructure-youre-looking-for/
https://www.infoworld.com/article/3541324/what-is-google-cloud-anthos-kubernetes-everywhere.html
https://www.acronis.com/en-us/articles/top-10-benefits-of-multi-cloud/

Categories: Infrastructure