C2C Monthly Recap: June 2022
- C2C News
Browse articles, resources, and the latest product updates.
Trevor Marshall (@tmarshall) had just left the stage after over an hour of nonstop conversation, but he was ready for another interview. The CTO of Current, an aptly named disruptor in the developing fintech space, had come to the event to participate in a panel discussion with Spenser Paul of DoiT (@spenserpaul), Michael Brzezinski of AMD (@mike.brzezinski), and Michael Beal of Data Capital Management (@MikeBeal), immediately following a one-on-one fireside chat with Paul, who also brought his Labrador, Milton, onstage with him for both sessions. Now Marshall was sitting at a wooden dining table in an open workspace overlooking Manhattan’s Little Island floating park, enthusiastically describing a proof of concept his company is running with Google Cloud’s C2D compute instances, an offering powered by AMD’s EPYC processors.

“It’s cool to actually be able to put a face to some of this technology,” he said. “We have a lot of compute-bound instances, and for me, I was like, ‘Oh, it’s the C2D guy!’” Brzezinski had discussed AMD’s role in bringing C2D instances to Google Cloud customers, but Marshall hadn’t known until the two were seated onstage together that his fellow panelist is directly involved in selling the same technology he hopes to adopt. “I’m going to be reaching out to that guy,” he said. “I do have some questions. That will actually unlock some progress in our stack, and I think that’s pretty sweet.”

Trevor Marshall of Current, Spenser Paul of DoiT, and Paul’s Labrador, Milton

Marshall’s positivity and excitement to collaborate reflected the prevailing atmosphere at C2C Connect Live, New York City, the most recent of C2C Global’s regional face-to-face events for Google Cloud customers and partners, this one hosted at Google’s 8510 building in Manhattan’s Chelsea neighborhood. 
The scheduled program put Marshall in conversation with Brzezinski, AMD’s Global Sales Manager, Paul, DoiT’s Head of Global Alliances, and Beal, Data Capital Management’s CEO, on the topic of innovation and cost optimization on Google Cloud. These sessions were designed as a starting point for the reception that followed, where the panelists and guests shared their stories and explored the topics discussed in more depth.

“You get an opportunity to say the things you feel like people are interested in, and then you get to talk with them afterward,” said Brzezinski. “They’ll come and ask you more about what you said, or say, ‘you mentioned this one thing, but I want to know more about something different.’”

Thomson Nguy (@thomson_nguy), Vice President of Sales in the Americas at Aiven, was grateful to be able to meet both Brzezinski and Beal in person, having worked with both companies, AMD as a vendor and Data Capital Management as a customer, but only remotely. “We’re an AMD customer, we’re a Google customer, but also we’ve got one of our customers [at the event] that can actually use the price performance that AMD can drive, and so it’s actually being able to connect relationships along the whole value chain,” he said. “Working together as partners, we can actually create real value for the customers.”

Customer conversations outside Google’s Goblin King Auditorium

Nguy particularly appreciated being able to make these connections in an informal setting, where sales was not top of mind for him or his team. When he and Beal met, before talking shop, the two reminisced about Harvard Business School, where both earned their MBAs. “This event was very natural,” said Nguy. “It wasn’t like going to an AWS summit, where you get lost in 10,000 people at the Javits Center. 
It’s a very intimate place that lets you connect and talk with people, and it has that really cool vibe, a community vibe that I really appreciate.” Faris Alrabi (@faris.alrabi), one of Aiven’s Sales Team Leads in the Americas, wholeheartedly agreed. At most events, he said, he feels obligated to pitch, whereas at C2C Connect Live, he went out of his way not to.

Attendees repeatedly echoed these sentiments. In conversation with Nguy in front of a spread of refreshments that depleted rapidly over the course of the reception, Geoff MacNeil of Crowdbotics, another company that brought multiple team members out to the event, attributed the unique value of this intimate setting to the possibility of chance encounters. “Collisions create innovation,” he said. “You collide two atoms together, you create something new. You collide two people together and have an open discussion, you learn something new, get new insight.” Nguy and MacNeil also exchanged information to discuss opportunities to partner in the future.

New business deals aside, however, guests agreed that the ability to meet and share ideas and impressions in person was reason enough to attend. “Even if we left this event without getting a single lead,” said Nguy, “the experience of being here and understanding our customers and the way they think and the way they talk in a lot fuller context, I thought that was super valuable.”

C2C will be hosting many more face-to-face events in the coming months. To connect with Google Cloud customers in your area and spark more innovation for your company, register for these upcoming events below:
The centerpiece of C2C’s virtual Earth Day conference, Clean Clouds, Happy Earth, was a panel discussion on sustainability in EMEA featuring C2C and Google Cloud partners HCL and AMD and cosmetics superpower L’Oreal. Moderated by Ian Pattison, EMEA Head of Sustainability Practice at Google Cloud, the conversation lasted the better part of an hour and explored a range of strategies for enabling organizations to build and run sustainable technology on Google Cloud.

According to Sanjay Singh, Executive VP of the Google Cloud Ecosystem Unit at HCL Technologies, when advising customers across the value chain who are evaluating cloud services, Google Cloud becomes a natural choice because of its focus on sustainability goals. Connecting customers to Google Cloud is a key part of HCL’s broader program for maintaining sustainable business practices at every organizational level. “What you cannot measure, you cannot improve,” says Singh, which is why HCL has created systems to measure every point of emission under its purview for carbon footprint impact. In alignment with Google Cloud’s commitment to run a carbon-free cloud platform by 2030, HCL plans to make its processes carbon neutral in the same timeframe.

Suresh Andani, Senior Director of Cloud Vertical Marketing at AMD, serves on a task force focused on defining the company’s sustainability goals as an enterprise and as a vendor. As a vendor, AMD prioritizes helping customers migrate to the cloud as well as making its compute products (CPUs and GPUs) more energy efficient, which it plans to do by a factor of 30 by 2025. On the enterprise side, Andani says, AMD relies on partners and vendors, so making sure AMD as an organization is sustainable extends to its ecosystem of suppliers. One of the biggest challenges, he says, is measuring partners’ operations. This challenge falls to AMD’s corporate responsibility team.

Health and beauty giant L’Oreal recently partnered with Google Cloud to run its beauty tech data engine. 
In the words of architect Antoine Castex, a C2C Team Lead in France, sustainability at L’Oreal is all about finding “the right solution for the right use case.” For Castex, this means prioritizing Software as a Service (SaaS) over Platform as a Service (PaaS), and only in the remotest cases using Infrastructure as a Service (IaaS). He is also emphatic about the importance of using serverless architecture and products like App Engine, which only run when in use, rather than running and consuming energy 24/7.

For Hervé Dumas, L’Oreal’s Sustainability IT Director, these solutions are part of what he calls “a strategic ambition,” which must be common across IT staff. Having IT staff dedicated to sustainability, he says, creates additional knowledge and enables a necessary transformation of the way the company works. As Castex puts it, this transformation will come about when companies like L’Oreal are able to “change the brain of the people.”

As Castex told C2C in a follow-up conversation after the event, the most encouraging takeaway from the panel for L’Oreal was the confirmation that other companies and tech players have “the same dream and ambition as us.” Watch a full recording of the conversation below, and check back to the C2C website over the next two weeks for more content produced exclusively for this community event. Also, if you’re based in EMEA and want to connect with other Google Cloud customers and partners in the C2C community, join us at one of our upcoming face-to-face events:

Extra Credit:
Sustainability is an inherent value of cloud computing and storage. According to Suresh Andani, Senior Director of Cloud Vertical Marketing at C2C Global Gold partner AMD, data center sustainability, which used to be an afterthought, has now become a key requirement. The first step to a more sustainable compute solution, he says, is migration to the cloud. This gives companies like AMD an immediate advantage: they are already offering a more sustainable solution. However, along with this advantage comes a challenge. All cloud partners provide the option to migrate. How can companies like AMD help further?

AMD will appear alongside a full lineup of C2C and Google Cloud customers and partners this Thursday, April 21, 2022 at Clean Clouds, Happy Earth, a special C2C Earth Day event for companies and practitioners committed to sustainable cloud solutions. Participating companies include Deutsche Bank and Nordic Choice Hotels, and full sessions will explore topics such as supply chain resiliency, food waste, environmental, social, and governance analysis, and sustainable IT. Andani will join a panel of executives featuring Sanjay Singh of C2C platinum partner HCL, Antoine Castex, a C2C Team Lead in France, Hervé Dumas of L’Oreal, and Ian Pattison, EMEA Head of Sustainability Practice at Google.

Andani hopes the panel will be “a channel to get the word out” about how AMD differentiates in the cloud computing space. All of AMD’s customers need to be able to reduce the amount of power they’re consuming as they process their workloads. AMD’s solutions are designed to solve this problem at the root cause. “Energy efficiency is not just about power consumed and how efficiently you address or cool,” Andani says. 
“It’s also about how you make your manufacturing process more sustainable.” To this end, several years ago, AMD implemented a chiplet architecture specifically designed to improve its yields and minimize waste. Now, says Andani, many of AMD’s peers are choosing to go the same route.

The fact that more providers in the cloud computing space are adopting more sustainable manufacturing processes is all the more reason for companies like AMD to participate in live events hosted by customer communities like C2C. As Andani was happy to share, he and Pattison have appeared together at similar events in the past. These panels, Andani affirms, are of unique value to Google Cloud customers looking to improve energy efficiency. Representatives of Google Cloud appear at such events to discuss how Google Cloud’s products use technologies such as AI and ML to monitor energy consumption. When the same panel features an end customer adopting this technology, in Andani’s words, “that completes the story.”

Join C2C Global and all of our distinguished sponsors and guests at 9:00 AM EDT on April 21, 2022 to witness the complete story of sustainable computing on Google Cloud. Use the link below to register:
Google Cloud provides virtual machines (VMs) to suit any workload, be it low cost, memory-intensive, or data-intensive, and any operating system, including multiple flavors of Linux and Windows Server. You can even couple two or more of these VMs for fast and consistent performance. VMs are also cost-efficient: pricier VMs come with 30% discounts for sustained use and discounts of over 60% for three-year commitments. Google’s VMs can be grouped into five categories.

Scale-out workloads (T2D)

If you’re managing or supporting a scale-out workload––for example, if you’re working with web servers, containerized microservices, media transcoding, or large-scale Java applications––you’ll find Google’s T2D ideal for your purposes. It’s cheaper and more powerful than general-purpose VMs from leading cloud vendors. It also comes with full x86 CPU compatibility, so you don’t need to port your applications to a new processor architecture. T2D VMs have up to 60 vCPUs, 4 GB of memory per vCPU, and up to 32 Gbps networking. Couple T2D VMs with Google Kubernetes Engine (GKE) for optimized price performance for your containerized workloads.

General-purpose workloads (E2, N2, N2D, N1)

Looking for a VM for general computing scenarios such as databases, development and testing environments, web applications, and mobile gaming? Google’s E2, N2, N2D, and N1 machines offer a balance of price and performance. The largest of these machines support up to 224 vCPUs and 896 GB of memory.

Differences between VM types: E2 VMs specialize in small to medium databases, microservices, virtual desktops, and development environments. E2s are also the cheapest general-purpose VMs. N2, N2D, and N1 VMs are better equipped for medium to large databases, media streaming, and cache computing.

Limitations: These VMs don’t come with discounts for sustained use. These VMs don’t support GPUs, local solid-state drives (SSDs), sole-tenant nodes, or nested virtualization. 
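To make the discount figures above concrete, here is a quick sketch of the arithmetic. The hourly rate is a made-up placeholder, not a published Google Cloud price; actual rates vary by machine type and region.

```python
# Illustrative discount arithmetic for VM pricing. The hourly rate
# below is a hypothetical placeholder, not a published Google Cloud
# price; real rates vary by machine type and region.

HOURS_PER_MONTH = 730  # average hours in a month, commonly used in cloud billing

def monthly_cost(hourly_rate: float, discount: float = 0.0) -> float:
    """Monthly cost after applying a fractional discount (0.30 = 30%)."""
    return hourly_rate * HOURS_PER_MONTH * (1.0 - discount)

rate = 0.10  # hypothetical $/hour for some VM
print(f"on demand:        ${monthly_cost(rate):.2f}/month")
print(f"sustained use:    ${monthly_cost(rate, 0.30):.2f}/month")  # ~30% off
print(f"3-year committed: ${monthly_cost(rate, 0.60):.2f}/month")  # ~60% off
```

Even at this toy rate, the spread between on-demand and committed pricing shows why planning your commitment term matters.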
Ultra-high memory VMs (M2, M1)

Memory-optimized VMs are ideal for memory-intensive workloads, offering more memory per core than other VM types, with up to 12 TB of memory. They’re ideal for applications with higher memory demands, such as in-memory data analytics workloads or large in-memory databases such as SAP HANA. Both models are also well suited to business warehousing (BW) workloads, genomics analysis, and SQL analysis services.

Differences between VM types: M1 works best with medium in-memory databases, such as Microsoft SQL Server. M2 works best with large in-memory databases.

Limitations: These memory-optimized VMs are only available in specific regions and zones on certain CPU processors. You can’t use regional persistent disks with memory-optimized machine types. Memory-optimized VMs don’t support graphics processing units (GPUs). M2 VMs don’t come with the same 60-91% discount as Google’s preemptible VMs (PVMs). (PVMs last no longer than 24 hours, can be stopped abruptly, and may sometimes not be available at all.)

Compute-intensive workloads (C2)

When you’re into high-performance computing (HPC) and want maximum scale and speed––for gaming, ad serving, media transcoding, AI/ML workloads, or extensive big data analysis––you’ll want Google Cloud’s flexible and scalable compute-optimized VMs (C2). C2 VMs offer up to 3.8 GHz sustained all-core turbo clock speed, the highest consistent per-core performance for real-time workloads.

Limitations: You can’t use regional persistent disks with C2s. C2s have different disk limits than general-purpose and memory-optimized VMs. C2s are only available in select zones and regions on specific CPU processors. C2s don’t support GPUs.

Demanding applications and workloads (A2)

Accelerator-optimized (A2) VMs are designed for your most demanding workloads, such as machine learning and high-performance computing. 
They’re the best option for workloads that require GPUs and are perfect for solving large problems in science, engineering, or business. A2 VMs range from 12 to 96 vCPUs, offering you up to 1,360 GB of memory. Each A2 machine type comes with a fixed number of attached GPUs. You can add up to 257 TB of local storage for applications that need higher storage performance.

Limitations: You can’t use regional persistent disks with A2 VMs. A2s are only available in certain regions and zones. A2s are only available on the Cascade Lake platform.

So: which VM should I choose for my project?

Any of the above VMs could be the right choice for you. To determine which would best suit your needs, take the following considerations into account:

Your workload: What are your CPU, memory, porting, and networking needs? Can you be flexible, or do you need a VM that fits your architecture? For example, if you use Intel AVX-512 and need to run on CPUs that have this capability, are you limited to VMs that fit this hardware, or can you be more flexible?

Price/performance: Is your workload memory-intensive, and do you need high-performance computing for maximum scale and speed? Does your business or project deal with data-intensive workloads? In each of these cases, you’ll have to pay more. Otherwise, go for the cheaper general-purpose VMs.

Deployment planning: What are your quota and capacity requirements? Where are you located? (Remember: some VMs are unavailable in certain regions.)

Do you work with VMs? Are you looking for a VM to support a current project? Which VMs do you use, or which would you consider? Drop us a line and let us know!

Extra Credit:
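The decision checklist above can be compressed into a toy lookup. The category names, mapping, and helper function below are purely illustrative of this article’s groupings, not an official Google Cloud selector.

```python
# Toy helper summarizing this article's VM guidance. The category
# names and mapping are illustrative only; consult Google Cloud's
# machine-family documentation before committing to a type.

FAMILY_BY_WORKLOAD = {
    "scale-out": "T2D",                 # web servers, containers, transcoding
    "general-purpose": "E2/N2/N2D/N1",  # databases, dev/test, web apps
    "memory-intensive": "M1/M2",        # SAP HANA, large in-memory databases
    "compute-intensive": "C2",          # HPC, gaming, ad serving
    "accelerated": "A2",                # GPU-bound ML and scientific workloads
}

def suggest_family(workload: str) -> str:
    """Return the machine family this article suggests for a workload category."""
    if workload not in FAMILY_BY_WORKLOAD:
        raise ValueError(f"unknown workload category: {workload!r}")
    return FAMILY_BY_WORKLOAD[workload]

print(suggest_family("memory-intensive"))  # M1/M2
```

A real selection would also weigh the region, quota, and disk limitations noted above, which a flat lookup like this cannot capture.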
Let’s go wild here. Say you’re uncertain whether to keep your brain as is. You think a certain Harry Potter-type surgeon could create a better version. But you’re not sure. You’re also afraid this surgeon will scotch up your brain while he fiddles on improvements. So you have the surgeon construct cabinets in your skull, on the periphery of your brain, where he does all his work. If his virtual brains are better than the brain you have now, the surgeon replaces your brain with his creations. If they’re not, the surgeon continues producing virtual brains in his cabinets until you’re satisfied with his results.

Those cabinets are called virtual machines. The layer that oversees and organizes these cabinets, as well as giving the surgeon more room to work in, is called the hypervisor.

Virtual Machines

In the computer world, we have the hardware, the equivalent of your body, and the software, the equivalent of your brain, that drives the body. Now, say you want to improve some existing software but are afraid that tinkering with it could irreversibly destroy the original system. Computer engineers solved that problem by building one or more virtual machines––virtual cabinets, like mini labs––where they tinker on their prototypes, called instances, while the original stays intact.

Hypervisor

At one time, this software tool was called the “supervisor.” It’s the additional digital layer that connects each of your virtual machines (VMs), supervises the work being done in the VMs, and separates each VM from the others. In this way, your instances are organized and your VMs are rendered coffin-tight to outside interference, protecting your instances, or innovations. You’ve got two types of hypervisors: those that sprint directly on the bare metal and those that shimmy on top of a host operating system. 
In either case, the hypervisor serves as an “alley” for storing additional work information.

Amazon’s Nitro Hypervisor

Nine years ago, Amazon Web Services (AWS) noticed that software developers would soon have a problem. Hypervisors were wasteful: they consumed too much RAM, they yielded inconsistent results, and their security would be challenged by increasingly aggressive attacks. “What we decided to do,” Anthony Liguori, a principal software engineer at Amazon and one of the key people who planned and executed the venture, told me, “was to completely rethink and reimagine the way things were traditionally done.”

VMs and hypervisors are software, and so are the elements––input/output (I/O) functionalities––integral to these systems. What AWS did was tweeze out each of these I/Os bit by bit and integrate them into dedicated hardware, the Nitro architecture, using novel silicon produced by Israeli startup Annapurna Labs. Today, all AWS virtualization happens in hardware instead of software, shaving management software costs and reducing jitter to microseconds.

Since 2017, more companies have emulated AWS and likewise migrated most of their virtualization functionalities to dedicated hardware, in some cases rendering the software hypervisor unnecessary: all virtualization can now be done from the hardware tech stack without need of a hypervisor.

Bottom Line

Virtual machines are for deploying virtualization models, where you can build and rebuild instances at your pleasure while protecting the original OS. The hypervisor operates and organizes these VMs and stores additional work information. In the last few years, AWS developed its revolutionary Nitro virtualization system, where software VMs and hypervisors were transmogrified into dedicated hardware form. In this way, working on instances becomes cheaper, faster, and more secure. Innovations also unfurl faster, since both software VM and hypervisor layers are eliminated. 
More vendors, like VMware, Microsoft, and Citrix, emulated Amazon and introduced their own so-called bare-metal hypervisors, too. These hypervisors are called Type 1.

Meanwhile, Google Cloud uses the security-hardened Kernel-based Virtual Machine (KVM) hypervisor. KVM is an open source virtualization technology built into Linux, essentially turning Linux into a system with both a hypervisor and virtual machines (VMs). Although it’s Type 2 (since it runs on top of an OS), it has all the capabilities of a Type 1.

Let’s Connect!

Leah Zitter, PhD, has a Masters in Philosophy, Epistemology and Logic and a PhD in Research Psychology.
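KVM, mentioned above, depends on hardware virtualization support: Intel VT-x or AMD-V, which Linux reports as the `vmx` and `svm` CPU feature flags in `/proc/cpuinfo`. The sketch below checks a flags line for those markers; the sample flag strings are abbreviated illustrations, not full real outputs.

```python
# Sketch: does a CPU advertise the hardware virtualization support
# KVM needs? On Linux, /proc/cpuinfo lists CPU feature flags; "vmx"
# marks Intel VT-x and "svm" marks AMD-V. The sample flag lines
# below are abbreviated illustrations.

def supports_kvm(flags_line: str) -> bool:
    """True if a /proc/cpuinfo flags line advertises VT-x or AMD-V."""
    flags = set(flags_line.split())
    return bool(flags & {"vmx", "svm"})

intel_flags = "fpu msr pae mce cx8 apic sep vmx sse2 ssse3"
amd_flags = "fpu msr pae mce cx8 apic sep svm sse2 ssse3"
no_virt_flags = "fpu msr pae mce cx8 apic sep sse2 ssse3"

print(supports_kvm(intel_flags))    # True
print(supports_kvm(no_virt_flags))  # False
```

On a real Linux host you would read the flags from `/proc/cpuinfo` itself; the pure-function form here just makes the check easy to see.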
While cloud computing has come a very long way since the nascent days of Google App Engine, we’re still only beginning to understand the areas in which cloud technology can make the greatest impact in our lives. One industry that recently took steps toward a more technology-based model is healthcare and wellness, with the adoption of data repositories like the Google Cloud Healthcare API for storing medical records and telehealth communication to make doctor’s visits and therapy appointments more accessible. Cloud computing has given providers options when it comes to communication, but the growing popularity of wellness and cognitive behavioral therapy (CBT) apps could soon lead to better data visualization and even personalized treatment when it comes to mental health.

What Is an Evidence-Based Mental Health App?

Even though it seems like there are apps for anxiety and depression of all shapes and sizes on the market today, the number of evidence-based mental health apps is still relatively small. That’s because, in order for an app to be considered evidence-based, it needs to meet certain U.S. Food and Drug Administration requirements or be supported by at least one randomized clinical research study demonstrating its effectiveness, as reported by PsychCentral. Evidence-based status matters because it gives patients and providers confidence in the app’s approach to diagnosis and treatment. It can also save time otherwise spent on unnecessary diagnostic testing and consultations with specialists, giving everyone a trusted, scientifically backed system for their mental health data. 
Useful Features of CBT Apps

Many evidence-based mental health apps come with a variety of features that individuals can use to manage or track some area of mental health. From setting reminders to take medication to deprogramming negative thought tendencies, mental health and CBT apps can help users build awareness around their mental health. These apps can support daily habits and willpower and give users a framework for improving their way of thinking. Beyond tracking, they can also hold journaling and notes on the impact of daily habits, creating one place where the user has the power to break bad habits and start healthier new ones.

Self-Management and Tracking

One evidence-based mental health app, Medisafe, allows users to set alerts to remind them when to take medication. Other CBT apps, such as Worry Knot by IntelliCare, use cognitive-behavioral principles like “tangled thinking” to teach users how to manage everyday worries and anxiety. Tracking and self-management can help users understand more about thought patterns and side effects from medication, all of which can help patients and doctors find the treatment path and clinical plan that works best for them. Apps for anxiety and depression also offer mindfulness and meditation techniques, quick mental reset programs, and even SOS buttons for users who urgently need support.

Data and Analytics

Users who want to manage their mental health through better sleep and routine exercise can use an app like Whoop to track their respiratory rate and sleep quality. 
Whoop is not a CBT or mental health app, but its ability to track patterns in sleep and recovery can help users zero in on the behavioral patterns that may be negatively impacting their health and, by extension, their mental health.

Personalized Recommendations

Other mental health apps, such as Breathe2Relax, equip users with recommendations for breathing exercises to soothe symptoms of PTSD, general anxiety, and more. Calm offers therapeutic music and can track sleep patterns, while MoodKit allows users to create customized journal entries for moods. As mental health apps become more specialized, these personalized touches make managing our health more approachable, enjoyable, and convenient.

Current Limitations of CBT Apps

While apps for anxiety and depression have grown in popularity, many are still created with little evidentiary support that they work. The ability to track and manage health with CBT apps can make users feel like they have the world in their hands, but the apps still require commitment. We all need accountability, and a friend, family member, or provider often creates a safer, more personal space to push forward, so we still have to show up and put in the effort. Because they rely on self-direction, CBT apps may also not be suitable for people with complex mental health needs or learning difficulties. Some critics argue that these apps address only current, narrowly defined problems and leave little room for exploring underlying causes of mental health issues, such as an unhappy home. Engaging with CBT apps can add pressure to face one’s fears, and it takes real honesty to commit to things that can eventually change our lives for the better. 
How Evidence-Based Mental Health Apps May Improve Treatment and Patient Plans

While we may still be in the beginning stages of understanding just how cognitive behavioral therapy apps meaningfully fit into the treatment of mental health, there are some early indications that a hybrid treatment plan could bring more mental health services to rural areas and help bridge the mental health treatment gap. Treatment plans are a good place to start when seeking to improve one’s mental health: a mental health treatment plan creates teamwork between patient and provider, which can greatly enhance client engagement. Grounding those plans in evidence-based resources makes them more trustworthy, and clear goals, milestones, and timelines make progress easier to record, so providers can better understand where a patient is and where they are headed, informed by the highly specific data captured by CBT apps. Evidence-based mental health apps will be a part of our evolving health systems, lowering healthcare costs, easing access to networks of medical data, and creating a more customized experience and road map to care.

Extra Credit:

https://psychcentral.com/blog/top-7-evidence-based-mental-health-apps#1
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5897664/
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7381081/#:~:text=Published%20reviews%20have%20found%20that,including%20substantial%20heterogeneity%20across%20studies.
https://riahealth.com/2019/10/15/mental-health-apps/
http://www.thecbtclinic.com/pros-cons-of-cbt-therapy
C2C Deep Dives invite members of the community to bring their questions directly to presenters. Do you have questions about all the options for securing communication between serverless compute products on Google Cloud? In this C2C Deep Dive, Guillaume Blaquiere (@guillaume blaquiere), cloud architect at Sfeir, covered OAuth 2 token usage, including the difference between access tokens and identity tokens, virtual private cloud (VPC) access and private network access, load balancers, ingress, and egress. Watch the video to learn how you can start taking control of your serverless infrastructure, and see how Guillaume answers the following common security questions:

What about patch management? How do you manage the network? How do you ensure high availability (HA)? How do you control access “from” and “to” the service? How do you mitigate DDoS?

Download the slides.
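One pattern behind these questions, authenticating service-to-service calls with identity tokens, uses Google Cloud’s metadata server. The sketch below only assembles the documented request; the audience URL is a placeholder, and actually fetching a token works only from inside Google Cloud (for example, on Cloud Run or Compute Engine).

```python
# Sketch of the request a serverless workload makes to obtain an
# identity token for calling a private service. The endpoint and
# "Metadata-Flavor: Google" header follow Google Cloud's documented
# convention; the audience URL below is a placeholder, and the
# actual HTTP call only succeeds from inside Google Cloud.
from urllib.parse import urlencode

METADATA_ID_TOKEN_URL = (
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/identity"
)

def identity_token_request(audience: str) -> tuple:
    """Build the (url, headers) pair for requesting an identity token."""
    url = f"{METADATA_ID_TOKEN_URL}?{urlencode({'audience': audience})}"
    return url, {"Metadata-Flavor": "Google"}  # header is mandatory

url, headers = identity_token_request("https://my-service.a.run.app")
# Inside Google Cloud you would GET this URL with these headers, then
# call the target service with "Authorization: Bearer <token>".
print(url)
print(headers)
```

The audience must match the URL of the service you are calling, which is how the receiving service verifies the token was minted for it.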
In 1999, Urs Hölzle joined Google as one of its first 10 employees and the first vice president of engineering. Twenty-one years later, he serves as the senior vice president for technical infrastructure and oversees the design, installation, and operation of the servers, networks, and data centers that power Google’s services. In sum, he is the person in charge of making all of Google’s wares available to developers around the world via Google Cloud.

Watch the whole interview below.
The Cockroach Labs 2021 Cloud Report is dense and data-driven, but you only need to remember one piece of information: Google Cloud was ranked number one. Just kidding! But really, it’s an impressive, thoroughly researched cloud report, and it’s worth your time to give it a read.

Now in its third year, the report evaluates the three big players––Google Cloud Platform, Amazon Web Services (AWS), and Azure––and aims to “tell a realistic and universal performance story on behalf of mission-critical online transactional processing (OLTP) applications.”

C2C sat down with one of the authors, Product Manager John Kendall, to learn about what Google Cloud Platform does well. “We have no stake in any of the providers, so things surprised me throughout all the providers,” Kendall said. “But what surprised me about Google Cloud Platform was they were the only provider to not have their Extreme PD [sku] option, yet they were still able to outperform across all the benchmarks.” In terms of customer experience and support, Kendall noted how well Google Cloud Platform documented and set public expectations, calling it a “pleasant experience.” As for Google’s competitors, Kendall noted that AWS was the most cost-effective and that Azure’s advanced ultra disks are worth the investment.

Other notable points from the report include: Google Cloud Platform wins the fastest processing rates on four out of four of the report’s benchmarks: network throughput, storage I/O read throughput, storage I/O write throughput, and maximum TPM throughput––a measure of throughput per minute (TPM) as defined by the Cockroach Labs Derivative of TPC-C. This is an improvement from ranking third in 2020. For the third year in a row, Google Cloud Platform won the network throughput benchmark, delivering nearly triple the throughput of either AWS or Azure. Notably, Google Cloud Platform’s worst-performing machine for network throughput outpaced both AWS and Azure’s best-performing machines. 
Google Cloud Platform’s general-purpose disk matched the performance of advanced-disk offerings from AWS and Azure. A note from Cockroach Labs: “Our intention is to help our customers and any builder of OLTP applications understand the performance tradeoffs present within each cloud and within each cloud’s individual machines.” Read the full report here and then join our C2C Connect groups to discuss the findings with your peers.

Extra Credit

Check out Google’s announcement from Next On-Air, introducing its Extreme PD SKU.
This article was originally published on October 13, 2020. In the third installment of its Rockstar Conversations series, C2C welcomed Urs Hölzle to discuss modernizing infrastructure for the present and for the future—a topic front and center on most people’s minds today. Hölzle, one of Google’s first 10 employees, is a leading expert on the topic. In addition to his extensive knowledge and experience in creating and further developing Google’s infrastructure, he is the person in charge of design, installation, operations, networks, and data centers, and of making it all available to developers around the world.

“The most exciting part of my job is that it really changes every year,” Hölzle said. “If you think back to 20 years ago, the internet looked very different. There was no cloud, and we got to build that ourselves…that’s really exciting.” He added, “But in addition to that, another exciting part of my job is that I get to work with the part of the internet that is actually real. I get to work with the unsung heroes behind the scenes that are unknown, but that make things work.”

Hölzle spoke about many topics during his hour-long appearance; here are five key takeaways from the lively conversation that took place on the C2C Rockstar stage.

People Drive the Innovation at the Enterprise Level

The technology is not the only thing that has evolved during Hölzle’s tenure at Google. The company itself has gone through changes, which in turn have pushed innovation forward. “One of the key things of designing the technical infrastructure for Google and for customers is to think about it in terms of how to create things that are scalable and reliable,” Hölzle said. “That is true not just for the technical parts, but really of the people parts of the organization.”

Amid constant change, two things have remained steady at Google. First, everything is all about collaboration. 
“Different disciplines need to come together when building a platform,” Hölzle said. “No one discipline is more important than another, and no one can do it alone.” Second, there is a culture of blameless postmortems at Google. “When stuff goes wrong, we focus not on blaming who was responsible, but really on completely analyzing it and learning from it, and then putting into place measures to prevent that in the future.”

Hölzle hammered home the point that solutions that are great today won’t remain great tomorrow. “Because the internet is constantly changing, the problems are changing, too, and you actually need to continuously adapt,” he said. “To be successful at scaling on an enterprise level, it’s a combination of needing to be really good at things, but at the same time needing to let go of things.”

Anthos Really Works—For Everyone and Everywhere

Anthos, an application platform that provides a consistent development and operations experience across cloud and on-premises environments, was first introduced in 2019 at the Google Next conference in San Francisco. “At Google,” Hölzle noted, “we say that launching something is not important, but landing it is.” He talked about how Anthos needed to land in customers’ hands and actually deliver value, and he believes it has done exactly that. “The real accomplishment for me about ‘Anthos the dream’ is that today it is ‘Anthos the reality.’ I'm really happy that a year and a half or so after general availability we are seeing real traction in the field, and we have production workloads in place and an exploding ecosystem around it.”

Through open APIs, Anthos works to standardize the tasks and questions that developers work on every day, and then provides those open APIs in a managed form so that they stay up to date. “You don’t have to worry about maintaining the stack no matter where it is—whether it is on-premises on bare metal or on VMware, or in the cloud,” Hölzle said. 
“You can pick your environment and all these things are the same. Your teams learn it once and they can use it anywhere.”

The Cloud Should Work for Everyone

Hölzle has been quoted as saying, “For the cloud to take over the world, it needs to make everyone successful.” Pressed on what he meant by that statement, he answered, “When we think about the cloud today, we think of it as a technology play. But actually, the cloud is still a niche development. In the grand scheme of things, it’s 10% of total IT.” He added, “For it to become more widely used, it needs to solve everyone’s problems, not just the problems of a technologist.”

The goal is to make things easier in the cloud than they are on premises, and the examples Hölzle used included compliance, productivity, and the life cycle of data. “All these things are actually still hard in the cloud, and so if the goal for the cloud is to get to 50%, 70%, or 80% of IT, you have to solve all these technical and nontechnical problems as well. And then it becomes available to a lot of these small companies that consume it indirectly, or large companies with ambitious goals, or heavily regulated companies, and so on.”

Hölzle noted that if we were having this same exact conversation in 2025 looking back at 2020, “we would all be pretty embarrassed.” He said the work we are doing now on the cloud is still very primitive and very functional. “Our goal should be to look back five years from now and say, ‘well, I remember liking it, but now I kinda don’t understand why, because it is so much better today.’” The collective goal should be for the cloud to make everyone successful by solving more problems across the board. “It’s a big challenge, but also a big opportunity,” Hölzle said. 
The Benefits of Confidential Computing

Just a few months ago, Google announced the expansion of its Confidential Computing portfolio, which makes Confidential Virtual Machines (VMs) publicly available and now also includes Confidential Google Kubernetes Engine (GKE) nodes. Asked what this announcement ultimately meant for customers, Hölzle noted that the move was part of a larger strategy to make cloud computing in Google Cloud Platform and Anthos completely private computing.

“We want this to be computing where, if you look at it and if you understand the whole stack and the whole technology, you say this is as much my infrastructure as my on-site data center and my own servers and my own set of system administrators. There is no provider risk in there,” he said. Hölzle further explained that confidential computing ensures that everything is encrypted. “Before confidential computing, one gap in encryption was that your data was encrypted when it was stored and when it was in transit on the networks, but when it was being processed in main memory or in the server, it was not encrypted, because it needed to be processed. Confidential computing really relies on hardware technology where the CPU always encrypts data before writing it to DRAM (dynamic random-access memory).” The biggest benefit is that there will not be a single bit of unencrypted information. “So, from the endpoint that holds your keys all the way into the CPU that actually touches your data, everything is encrypted. That’s the vision that we’re working through.”

The Cloud Needs Clean Energy

Last month, Google Cloud set a goal to run its business on carbon-free energy everywhere, at all times, by 2030. A blog post written by Hölzle noted, “This means we’re aiming to always have our data centers supplied with carbon-free energy. 
We are the first cloud provider to make this commitment, and we intend to be the first to achieve it, too.” This initiative—which started off as a cost-savings measure—is important for Google Cloud, especially now that the company knows the environmental impacts at stake. “In 2007, I saw that we [Google] were getting bigger, but also that climate and carbon were going to be a problem that was only going to get bigger and more urgent,” Hölzle said. “That’s when we decided to become carbon neutral.”

Since 2017, Google Cloud has purchased as much renewable power every year as it uses in all of its operations. “For every kilowatt-hour of power we consume, we purchase a kilowatt-hour of renewable energy from a new wind or solar farm,” Hölzle said. As part of its newest goal, set forth this year, Hölzle noted in his blog post that as Google learns by doing, it will also help develop useful tools to empower others to follow suit. “In the last decade, we've led the way in deploying renewable energy at scale—and, in the process, helped drive down costs for wind and solar. It’s time to do the same for next-generation technologies that will allow for a wholesale transition to 24/7 carbon-free energy.”
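For readers who want to try the Confidential Computing offerings discussed above, both Confidential VMs and Confidential GKE nodes can be enabled from the gcloud CLI. The commands below are a minimal sketch: the instance name, cluster name, zone, image, and machine type are illustrative placeholders, and your project must have the relevant quotas and APIs enabled.

```shell
# Create a Confidential VM. Confidential VMs run on AMD EPYC-based
# N2D machine types; names and zone here are placeholder assumptions.
gcloud compute instances create demo-confidential-vm \
  --zone=us-central1-a \
  --machine-type=n2d-standard-2 \
  --confidential-compute \
  --maintenance-policy=TERMINATE \
  --image-family=ubuntu-2204-lts \
  --image-project=ubuntu-os-cloud

# Create a GKE cluster whose nodes run as Confidential VMs, so pod
# memory is encrypted by the CPU before being written to DRAM.
gcloud container clusters create demo-confidential-cluster \
  --zone=us-central1-a \
  --machine-type=n2d-standard-2 \
  --enable-confidential-nodes
```

The `--maintenance-policy=TERMINATE` flag reflects that Confidential VMs do not support live migration during host maintenance.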