Intel and Google Cloud technologies are fueling the scalability, performance, and security of the Workspot Cloud PC platform. Organizations can implement these technologies to reimagine end-user computing. For this 2Learn event, C2C invited Intel and Workspot to showcase how end-user computing on Google Cloud enables Workspot to deliver reliable, low-latency cloud PCs to users across the world, any time, from any device. Guests heard how to increase virtual CPU performance by 27% with powerful Intel® Xeon® Scalable processors.

The speakers explored how Intel leverages Workspot’s:

- 100% cloud-native approach to VDI migration on Google Cloud infrastructure
- Zero-trust security and on-demand scalability
- Modern approach to VDI migration with cloud PCs
- Cloud adoption, infrastructure framework, and center-of-excellence migration tools and methodology

Watch a full recording of this event below:

Extra Credit:
“Until your event,” said Alejandro Lorenz, Lead Software Architect at IT4IPM, in a follow-up interview after attending C2C’s first Cloud Adoption Summit in London on November 30th, “I didn’t think that we had a problem.” Now, Alejandro says, after hearing from other companies at the summit about the investments they had made in their cloud migrations and their faster migrations’ impact on their overall management, he realizes that his own organization’s top management “must fully invest in the migration” to Google Cloud.

The Cloud Adoption Summit was the first event of its kind for C2C: a day-long event featuring a keynote address, several industry panels and interactive breakout sessions, and numerous opportunities to network over meals and drinks with the one-hundred-plus guests and two-dozen-plus speakers in attendance. Many of the sessions were hosted by C2C partners, including Deloitte, Palo Alto Networks, Aiven, Workspot, and Appsbroker, but some of Google Cloud’s biggest customers, like Deutsche Bank and GEMA, also appeared to present and discuss their business and technical initiatives. Topics ranged from cybersecurity and infrastructure modernization to sustainability lessons from the last ten years of cloud adoption.

Breakout session with Aiven and Elwood Technologies

“Being at your event and hearing other companies and what they are doing helped me,” Alejandro said. Many of the attendees shared similar sentiments about the content of the presentations and the breakout sessions. “It was really important for me to attend Elwood’s CISO talk. We are a small team, as they are. And we need to go for full automation, to automate everything as they did,” added Tobias Hingerl (@THingerl), Alejandro’s colleague at IT4IPM. Tobias was referring to a session called “Scaling Fast but Secure,” a conversation with Daniel Jones of Elwood Technologies led by C2C partner Aiven.
The kind of insight Tobias is describing––that increasing investment in transformative cloud technologies can present new opportunities for growth, efficiency, and security––is exactly what the Cloud Adoption Summit and the Google Cloud Customer Community itself exist to make possible.

For many of the guests, the keynote by John Abel, Technical Director in Google Cloud’s Office of the CTO, provided not just insight but inspiration. “It was fascinating to hear John talking about sustainability,” said Damien St. John, European Sales Engineer Lead at Appsbroker. “Not only for the impact we have on the environment...it helped me understand the marketing perspectives.” Jeremy Norwood, COO of Skytra Ltd., agreed. “John’s numbers made me think about the impact of our business on the environment. Sustainability is a big thing for sure. I have the sense that Google is a leader and they want to continue.” For Workspot Sales Engineer John Samuel (@C2Csamuelj), the call to action was immediate. “When I returned home, I went to my PC and changed my screen settings to dark mode,” he said. “I checked the energy consumption before and after. I was amazed. His talk gave us food for thought.”

Other guests found the most value in the networking opportunities the event provided, which encouraged sharing ideas and making plans to follow up. Tobias and Alejandro are looking forward to catching up with Palo Alto Networks about how their platform can help IT4IPM, and with Appsbroker about hiring new security engineers. John says Workspot made several new connections, and he’s excited to get in touch with Robert Burton (@Rob Burton) of Bupa about a solution for Workspot’s cloud desktops.
Damien is anticipating a training opportunity with Google and more than one new business opportunity, including one with Mastercard.

Guests networking between sessions

For Daniel Vaughan (@daniel.vaughan) of Mastercard, the presentations and the networking at the event provided an entirely new way of looking at cloud adoption. In the conversations Daniel observed and joined, a major theme emerged: the biggest barrier to cloud adoption is employee skills. This theme was provocative enough for Daniel that he published an article on the C2C platform exploring his experience at the Cloud Adoption Summit and what he learned about the importance of skills in the cloud adoption context. “No matter how good the technology is,” he says, “without experienced people who are using it and, most importantly, can show others how to use it well, its adoption will be limited.”

For the organizations who adopt it, the cloud makes storage and resources available all over the world, providing a solution to fit every need. A community like C2C does the same thing with people. At the Cloud Adoption Summit, the entire C2C community came together in one place to connect. Now those connections exist across the globe, and the people who made them can access them at any time. You and your organization can make and benefit from these connections, too. Join C2C as a member today to become a part of our global community. We’ll see you at the next Cloud Adoption Summit.

Extra Credit:
On November 30, 2022, I attended the Google Cloud Adoption Summit at Google’s offices in London. C2C Global, the Google Cloud Customer Community, organized the event. Although different aspects of cloud adoption were covered, the part that stood out for me from the sessions I attended and hallway conversations was training and enablement. Enablement has never been the core of my role––there have always been delivery, strategy, or pre-sales aspects that take priority––but it has always been a favorite.

My career’s most memorable and rewarding highlights have been related to enablement. One of these was when I visited a company and met an engineer with a copy of a book I had written, full of post-its and handwritten notes in pencil. Another was the people who thanked me for the value they got from the internal Technical Seminars Program I organized at EMBL-EBI, who all went on to get great jobs in tech when their contracts ended. This direct impact on the lives of individuals is what attracts me to the work I do.

Although Google is the number three public cloud, I believe Google recognizes that a lack of skills in the market is the main factor holding back Google Cloud adoption, and is addressing this from three directions:

- The excellent top-down work Google is doing by creating great content with Developer Advocates like Stephanie Wong (@stephr_wong) on YouTube. The new Google Cloud Skills Boost program with Qwiklabs provides masses of quality material for an affordable yearly subscription, in a way similar to what ACloudGuru did for AWS.
- The enabling of partners for training delivery and customer enablement to meet customers at their level. This includes C2C itself. These partners complement impressively knowledgeable Google customer engineers such as the ones I met at the recent Google Cloud Next Developers Day.
- The support of the developer community to build capabilities bottom-up by encouraging developers to freely experiment and learn more about Google technologies.
Initiatives include Google Developer Groups (GDG) in the tech community, Google Developer Student Clubs (GDSC) in universities, Women Techmakers (WTM), and supporting Google Developer Experts (GDE). No matter how good the technology is, without experienced people who are using it and, most importantly, can show others how to use it well, its adoption will be limited. Google recognizes this and is addressing it well.

At the C2C event, Deloitte talked about several alternative approaches to building skills, from building Tech Hubs (centers of excellence) in organizations, with specialists who support existing teams, to Cloud Academies, where new entrants to the industry, often from non-traditional backgrounds, are put through extensive training. While there is great value to the latter, as it brings diverse experiences into the industry, this approach must be combined with other initiatives. I cannot help but remember the "paper MCSEs" of the late ’90s, when people with no industry experience paid for six-week courses that got them through Microsoft Certified Systems Engineer certification. This then led to many self-described “IT refugees” who left the industry as the market turned in the early 2000s and outsourcing took hold.

Sam Caley, Cloud Program Lead at Deutsche Bank, made a good point in one session: for Deutsche Bank, it’s important to have a deep knowledge of the existing applications combined with cloud knowledge. This means upskilling the existing people who may have been working with these applications for the last ten years rather than bringing in new people with cloud experience alone.

I agree with Sam; with core financial systems, stability and security are non-negotiable, and a team working on a cloud migration needs to know what it is doing. There needs to be both a deep understanding of the application and experience with cloud-native principles.
A lift-and-shift or even a move-and-improve is not going to cut it.

Deloitte industry panel at the Cloud Adoption Summit

This leaves me with two questions: how will there be time made for enablement, and who will do the enabling?

In terms of time, when I led an engineering team on our first cloud-native project with Kubernetes on AWS, it took six months for the team to become comfortable with the new architecture and development style. I believe, from my experience at HCL Cloud Native Labs with Alan Flower, that ideally I would want up to six weeks with a team working hands-on on practical projects, or "Game Days" as AWS calls them, as well as formal training to build capabilities.

This is a similar time investment to the people in Cloud Academies learning from scratch. The Pivotal Platform Accelerate Lab for PCF, for example, which offered this type of combination of training and hands-on practice, ran for three weeks, and it was expensive. Who does the "day jobs" of the people who need to be upskilled during this time? There seems to be no slack in the system, with many organizations already struggling to hire or retain enough people to keep the lights on, so who will cover for those who are training?

Then, who will do the enabling? With the demand for skilled people, there are plenty of positions for people happy to do "just engineering," with high salaries and good conditions. Why would experienced practitioners want to add the complication of training to their capabilities? Talking to Carl Tanner (@Grassycarl), Google Global Head of Learning Partnerships, I learned that people who are both active, experienced practitioners and skilled trainers are few and far between, with Google itself employing only 20 such people worldwide. Google is addressing this problem through partnerships.
I also spoke to Mike Conner (@Mike Conner) of Appsbroker, one of Google’s main partners in the UK and a training provider. Appsbroker’s approach is to enable the organization rather than just train individuals, which seems sensible. Establishing communities of practice to leave a legacy after training, with ongoing support, is a good idea. This worked well at EMBL-EBI, where after seminars and workshops, we were keen to get the people who were interested in going further into communities of practice to keep the momentum going. The issue I see, however, is that Google Certified Trainers need to be affiliated with a training provider to be able to be trained themselves. As Carl told me, this means these people tend to be contractors wanting "portfolio careers" as both trainers and practitioners. This seems to be a limited pool.

Enablement is not a zero-sum game. For training providers, there is no shortage of people to train, and having trained people benefits the ecosystem as a whole. I would love to see more collaboration between partners and community groups, for example. My view on recruitment of rare, valuable skills is to nurture a community and recruit from it. Google Cloud is looking to train 40 million people in cloud skills. This is a massive number. If this is going to happen, these barriers need to continue to be removed, with the people who are in a position to enable being supported, utilized, and rewarded, and with resources shared as freely as possible.

In all, this was a very interesting day. I did not expect to leave with so many insights on training and enablement, but I am glad I did.
This is a significant opportunity for Google Cloud, and the same must apply to other cloud providers and extend to platform vendors such as IBM, Red Hat, and VMware Tanzu, as it is more about the techniques and experience of cloud-native architecture and development than any particular implementation. As always in IT, it comes down to a people issue.

Do you think skills are the biggest barrier to cloud adoption? How is enablement accomplished at your organization? Let us know in the replies, or better yet, post in our community and tell us your story. Also, make sure to check our platform in the coming weeks for more coverage of our first Cloud Adoption Summit.
The centerpiece of C2C’s virtual Earth Day conference, Clean Clouds, Happy Earth, was a panel discussion on sustainability in EMEA featuring C2C and Google Cloud partners HCL and AMD and cosmetics superpower L’Oreal. Moderated by Ian Pattison, EMEA Head of Sustainability Practice at Google Cloud, the conversation lasted the better part of an hour and explored a range of strategies for enabling organizations to build and run sustainable technology on Google Cloud.

According to Sanjay Singh, Executive VP of the Google Cloud Ecosystem Unit at HCL Technologies, when advising customers across the value chain evaluating cloud services, Google Cloud becomes a natural choice because of its focus on sustainable goals. Connecting customers to Google Cloud is a key part of HCL’s broader program for maintaining sustainable business practices at every organizational level. “What you cannot measure, you cannot improve,” says Singh, which is why HCL has created systems to measure every point of emission under its purview for carbon footprint impact. In alignment with Google Cloud’s commitment to run a carbon-free cloud platform by 2030, HCL plans to make its processes carbon neutral in the same timeframe.

Suresh Andani, Senior Director of Cloud Vertical Marketing at AMD, serves on a task force focused on defining the company’s sustainability goals as an enterprise and as a vendor. As a vendor, AMD prioritizes helping customers migrate to the cloud as well as making its compute products (CPUs and GPUs) more energy efficient, which it plans to do by a factor of 30 by 2025. On the enterprise side, Andani says, AMD relies on partners and vendors, so making sure AMD as an organization is sustainable extends to its ecosystem of suppliers. One of the biggest challenges, he says, is to measure partners’ operations. This challenge falls to AMD’s corporate responsibility team.

Health and beauty giant L’Oreal recently partnered with Google Cloud to run its beauty tech data engine.
In the words of architect Antoine Castex, a C2C Team Lead in France, sustainability at L’Oreal is all about finding “the right solution for the right use case.” For Castex, this means prioritizing Software as a Service (SaaS) over Platform as a Service (PaaS), and only in the remotest cases using Infrastructure as a Service (IaaS). He is also emphatic about the importance of using serverless architecture and products like App Engine, which only run when in use, rather than running and consuming energy 24/7.

For Hervé Dumas, L’Oreal’s Sustainability IT Director, these solutions are part of what he calls “a strategic ambition,” which must be common across IT staff. Having IT staff dedicated to sustainability, he says, creates additional knowledge and enables the necessary transformation of the way the company works. As Castex puts it, this transformation will come about when companies like L’Oreal are able to “change the brain of the people.”

As Castex told C2C in a follow-up conversation after the event, the most encouraging takeaway from the panel for L’Oreal was the confirmation that other companies and tech players have “the same dream and ambition as us.” Watch a full recording of the conversation below, and check back to the C2C website over the next two weeks for more content produced exclusively for this community event. Also, if you’re based in EMEA and want to connect with other Google Cloud customers and partners in the C2C community, join us at one of our upcoming face-to-face events:

Extra Credit:
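Castex’s point about scale-to-zero services like App Engine can be made concrete with some back-of-the-envelope arithmetic. This is a generic illustration; the workload numbers below are invented assumptions, not figures from L’Oreal or Google:

```python
# Rough comparison of compute hours for an always-on VM versus a
# scale-to-zero serverless service. All workload numbers here are
# illustrative assumptions.
HOURS_PER_MONTH = 730  # average hours in a month

def monthly_compute_hours(active_hours_per_day, always_on=False, days=30):
    """Hours of compute billed (and energy consumed) per month."""
    if always_on:
        return HOURS_PER_MONTH  # runs 24/7 regardless of traffic
    return active_hours_per_day * days  # runs only while serving requests

vm_hours = monthly_compute_hours(0, always_on=True)  # always-on: 730 hours
serverless_hours = monthly_compute_hours(2)          # ~2 busy hours/day: 60 hours
print(f"{1 - serverless_hours / vm_hours:.0%} fewer compute hours")  # 92% fewer compute hours
```

The exact savings depend entirely on the traffic pattern; the point, as Castex makes it, is that a service that only runs when in use consumes nothing while idle.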
Results of the 2022 ASUG Pulse of the SAP Customer Research among Google Cloud Platform users identified key insights for organizations in 2022, including top focus areas and challenges experienced when migrating SAP instances to the cloud. View the image as a full-scale PDF here.

Extra Credit:
On Jan. 13, 2022, the C2C Connect: DACH group invited Michel Lovis of TX Group to their community gathering to give a presentation about TX Group’s migration from Microsoft Suite to Google Workspace. After an introduction from co-host and DACH team lead Chanel Greco (@chanelgreco), Michel analyzed the digitization process, the challenges TX Group faced, and the measures they took to ensure that the effort would succeed. Below are summaries of some of the key points covered during the session:

1. TX Group has evolved from being a newspaper-only company in 1893 to becoming an internationally recognized network of media and platforms.
2. TX Group has become the largest private digital network platform in Switzerland, reaching over 80% of the population, with 3,700 employees, around 500 technology experts, and 800 journalists from over 50 nations, and their digital revenue share is 53%.
3. TX Group today consists of Tamedia (paid media), 20 Minuten (free media), Goldbach (advertising), and TX Markets AG (marketplaces), all of which are using scalable technology architecture in a federated organizational setup (cloud first/only, with a strong push for agility and speed).
4. In 2015, the company shifted workspace operations from Microsoft to Google. The goal of the project was to get all users to adopt most of the Google Workspace applications, including Sheets, Docs, Slides, and Meet, thus making a big change toward the digital environment they have today.
5. Vision vs. reality: close customer care is key! The challenge of migrations is that it takes time for people to lose their original workspace and get used to change. Today TX Group retains many Microsoft installations, and will retain them long-term in some areas, departments, and Teams.
6. TX Group introduced the following measures and resources after launching Workspace:
   - An internal Google CC with Google Experts
   - Business proximity concept implementation
   - Welcome info for new employees
   - Knowledge-sharing and other help offerings
   - Roadshow coffees
   - Inviting people to express questions via management care and a satisfaction survey
   - Specific courses, including a transformation lab
7. The second bigger change was the implementation of Goldbach. The bigger challenges here included integrating the employees of a new company, which required a complete change of their working environment.
8. TX Group identified six main measures that were taken in order to make the process easier.
9. At the end of the session, the guests shared the benefits from their Google journey and the areas that would need a closer look.
10. Key takeaways from the session included: act faster, become more open, try something new, and "Pull faster than your shadow IT".

Watch a full recording of the event below:
In early 2021, Rich Hoyer, Director of Customer FinOps for SADA, published an opinion piece in VentureBeat that refuted the findings of an earlier published article about the cost of hosting workloads in the cloud. In his rebuttal, Hoyer called the article (which was written by representatives of Andreessen Horowitz Capital Management) “dead wrong” with regard to its findings about cloud repatriation and costs.

Hoyer’s expertise and his views on doing business in the cloud make him an ideal participant for a C2C Global panel discussion taking place on January 20, at which he will appear alongside representatives of Twitter and Etsy to talk about whether or not enterprises should consider moving workloads off the cloud and into data centers. Hoyer predicts the panel conversation will lean away from the concept of repatriation and more toward the concept of balancing workloads.

“I don’t think repatriation is the right term,” Hoyer says. “To me, it’s much more a decision of what workloads should be where, so I would phrase it as rebalancing—as more optimally balancing. Repatriation implies that there’s this lifecycle. That’s just not the way it works. How many startups have workloads that are architected from the ground up and not cloud native? You don’t see that. If you’re cloud native, you start using the stuff as cloud native.”

The panel discussion will focus on hybrid workloads, he says, with a specific eye toward what works from a cost standpoint for each individual customer. “We want cloud consumers to be successful, and if they have stuff in the cloud that ought not to be there, they’re going to be unhappy with those workloads,” Hoyer says. “That’s not good for us, it’s not good for Google, it’s not good for anybody.
We want only things in the cloud that are going to be successful because customers know they’re getting value from it, because that’s what’s going to cause them to expand and grow in the cloud.”

From his FinOps viewpoint, Hoyer says he will be advocating for the process of making decisions around managing spend in the public cloud, and the disciplines around making those decisions. “The whole process of trying to get control of this begins with the idea of visibility into what the spend is, and that means you have to have an understanding of how to report against it, how to apply the tooling to do things like anomaly alerting,” he says. “I expect the discussion to be less about whether there should be repatriation, and the more constructive discussion to be about the ways to think about how to keep the balance right.”

The overall goal of the panel is to present a process for analyzing workloads. And according to Hoyer, that’s not a one-time process—it’s iterative. “I’ll encourage anyone who has hybrid scenarios—some in the data center and some in the cloud—to be doing iterated looks at that to see what workloads should still be in the cloud,” Hoyer says. “There should be an iteration: Here’s what’s in the cloud today, here’s what’s in the data center today, and in broad terms, are these the right workloads? And then also, when stuff is in the cloud, are we operating it efficiently? And that’s a constant process, because you’ll have workloads that grow from the size they were in the cloud. And we’ll hear that same evaluation from the technology standpoint—are we using the best products in the cloud, and are there things in the data center that ought not to be there?”

Be sure to join C2C Global, SADA, Twitter, and Etsy for this important conversation and arm your business with the tools needed to make intelligent and informed decisions about running your workloads and scaling your business. Click the link below to register.
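The anomaly-alerting idea Hoyer mentions can be sketched in a few lines. This is a generic illustration of the technique, not SADA’s or Google’s tooling; the cost figures and threshold are invented assumptions:

```python
# A minimal spend-anomaly check: flag any day whose cost deviates from
# the mean by more than `threshold` standard deviations. Illustrative
# only -- real FinOps tooling works from billing exports and uses
# richer statistical models.
from statistics import mean, stdev

def flag_anomalies(daily_costs, threshold=2.0):
    """Return the indices of anomalous days in a series of daily costs."""
    mu, sigma = mean(daily_costs), stdev(daily_costs)
    if sigma == 0:
        return []  # perfectly flat spend: nothing to flag
    return [i for i, cost in enumerate(daily_costs)
            if abs(cost - mu) / sigma > threshold]

# A steady ~$100/day baseline with a spike on day 5.
print(flag_anomalies([100, 102, 98, 101, 99, 400, 100]))  # [5]
```

Visibility comes first, as Hoyer says: a check like this is only possible once spend is being reported per day (or per project, per label) in the first place.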
With migration to the cloud continuing across the public and private sectors at an accelerating rate, stories of successful migration projects are becoming especially timely and valuable. Organizations considering migration want to hear from organizations that have executed the process successfully. As these stories emerge with increasing frequency, sharing them within and among communities like C2C becomes not only natural but necessary.

As we initially reported this October, NextGen Healthcare recently partnered with Managecore to simultaneously migrate their SAP applications from a private to a public cloud infrastructure and upgrade to the SAP HANA database. This was an ambitious migration project, and given the regulations around NextGen’s personally identifiable data, failure was not an option. Despite these unique considerations, the team completed the project in under six months. On October 28, 2021, C2C’s David Wascom connected with Karen Bollinger of NextGen Healthcare and Frank Powell of Managecore for a virtual C2C Navigator event exploring the background and the details of this successful project.

The conversation began the way a migration process itself begins: the team established customer goals. When Wascom asked what customers typically want from a migration, Powell offered three main goals common to organizations considering migration: greater stability, lower fees and personnel costs, and “time to innovate and do new things for their organization.”

After wrapping up this high-level overview, Wascom asked Bollinger and Powell for a more detailed description of the migration process. Bollinger outlined the main phases of the migration period, from moving the infrastructure from cloud to cloud, to updating the landscape to the latest service pack, to moving everything into the HANA database.
Powell stressed the importance of the preliminary phase of the migration, including testing and defining SAP strategy.

The discussion became most lively when Wascom asked Powell and Bollinger about their data security strategy. As a healthcare provider, NextGen is beholden to HIPAA and attendant ethical and legal considerations concerning data security. “Security is on everyone’s mind, even on-prem,” said Powell. Bollinger was equally unequivocal, if not more so. “I have no choice,” she said. “I’m in healthcare.”

What does it take to migrate a massive quantity of sensitive data successfully and securely? According to Bollinger, it takes a trusted partner. “What I was looking for was a partner,” she said. “A third-party partner that we could have these conversations with.” The sentiment resonated with Wascom, who added, “The fact that you were able to work towards a common goal is a hugely powerful story.” Powell agreed wholeheartedly. For him, partnership is not just a goal, it’s a requirement. “As a service provider, our goals have to align with our customers’,” he said. “If they don’t, then right from the get-go, we have failed.”

When Wascom asked Bollinger and Powell for final reflections and advice for other executives considering migrating their own organizations, both responded positively and succinctly. The biggest takeaway for Bollinger? “It can be done.” Powell was similarly encouraging. “Talk to someone who’s been successful at it,” he said. “Use those as your reference points.” The reason for this, in his words, was just as simple: “We’re dealing with some pretty amazing technology.”

C2C brings people like Bollinger and Powell together to demonstrate the potential of cloud technology for organizations seeking solutions and success. How is your organization hosting its software and data? Have you considered a migration to the cloud, or to a different cloud infrastructure?
Would you like to hear from other organizations where similar projects have been successful? Reach out and let us know what you’re thinking, and we’ll incorporate your thoughts as we plan future discussions and events.

Extra Credit:
Managecore, a Foundational Gold Partner of C2C and Premier Google Cloud Partner, recently collaborated with NextGen Healthcare to migrate SAP to host on Google Cloud. In less than six months, Managecore supported moving NextGen’s SAP workloads in addition to upgrading to the latest version of HANA. Here to discuss the project on the C2C virtual stage were panelists from each company:

- Karen Bollinger — Vice President, Business Applications, NextGen Healthcare
- Frank Powell — President/Partner, Managecore

Key Discussion Points:

- An introduction to NextGen Healthcare and the problems they were trying to solve by introducing a hyperscaler to their SAP environment and partnering with Managecore
- Using managed services from Google Cloud to open up new agile business opportunities and improved performance, confidence, stability, and availability
- Considerations for security and HIPAA compliance when migrating a healthcare company’s SAP data workloads to a new cloud environment

Watch the entire conversation here:
Migrating SAP applications to the cloud can be a complicated, time-consuming undertaking. The road to a successful cloud migration project and a stable cloud environment is often filled with twists, turns, and hurdles. Yet there are steps your organization can take to ensure success. Earlier this year, NextGen Healthcare migrated from a private cloud landscape to a public cloud landscape with Google Cloud while also upgrading to the SAP HANA database in the same project. “This project is not as simple as moving to a different location,” said Karen Bollinger, Vice President of Business Applications at NextGen Healthcare.

To ensure a successful migration project, the healthcare technology organization partnered with Managecore, a technical managed services company focused on SAP. Bollinger emphasized that collaboration was one of the keys to the project’s success, which set NextGen Healthcare up with a stable cloud landscape and laid a foundation for future growth on Google Cloud.

“If done properly, the promise of the cloud can truly be achieved,” said Frank Powell, President of Managecore. “You just need the right team.”

The Need for a Change

Before beginning this project, NextGen Healthcare had been leveraging SAP for about a decade. The company was running several SAP solutions, including SAP ECC, SAP Business Warehouse, SAP Business Planning and Consolidation, SAP Financial Accounting, and SAP Controlling. NextGen Healthcare already had an existing partnership with Managecore when Bollinger approached the organization to assist with some security-focused work on NextGen Healthcare’s SAP landscape.

The conversations between the two organizations evolved into a discussion of how NextGen Healthcare would transition to Google Cloud. NextGen Healthcare had been thinking about moving its SAP landscapes from another hyperscaler. Bollinger noted that NextGen Healthcare hoped to work with a managed service provider that offered increased transparency and more flexibility with their cloud environments.
Making the Transition In addition to migrating to Google Cloud, Managecore also updated NextGen Healthcare’s SAP database, implementing SAP HANA in under six months.“When we are moving organizations to the cloud, we are always trying to get the biggest bang for our buck,” Powell said.This led NextGen Healthcare to see a significant improvement in the stability of its SAP landscape, better up times, and overall improved performance. Managecore also helped NextGen Healthcare decrease its monthly hosting costs and gave the organization a foundation to improve its SAP landscape in the future.“The world is their oyster,” Powell said. “NextGen Healthcare is in a perfect position, from a technology standpoint, to take advantage of the Google Cloud Platform.”Bollinger noted that this transition has NextGen Healthcare in a position to migrate from SAP ECC 6 to SAP S/4HANA, giving them both the ability and the agility to tackle that project in the future. Keys to Success According to Bollinger, one of the keys to this project’s success was having Managecore as a partner.“You need a great partner,” she said, emphasizing that organizations need to collaborate with partners that have both expertise and experience. Powell highlighted the caliber of the team working on this migration, noting that successful teams need to know how to “tune” SAP applications to run in Google Cloud efficiently. Both Bollinger and Powell emphasized that this was a collaborative effort and that the project’s success is due to the expertise and partnership among the project’s team. Learn More About Success in Google Cloud While many organizations are migrating their SAP workloads to Google Cloud, some are still showing trepidation about tackling such an expansive and complex project.“If you haven’t thought about moving to the cloud or you aren’t convinced, talk to someone who has been successful with one of these projects,” Powell said.Both Bollinger and Powell will be sitting down on Oct. 
28 at 11 a.m. CT for a C2C Navigators webcast focused on this project. They will go into further depth about the ins and outs of their success, and they’ll help attendees figure out how to complete a successful and fast migration. They’ll also discuss how the two organizations have worked together to ensure this success continues after the go-live. Interested in digging deeper into this story? Register here and save your spot! Extra Credit: Looking to connect with your peers or expand your network? Join the SAP on GCP Community here on C2C.
In 2019, the public cloud services market reached $233.4 billion in revenue. This already impressive number is made even more impressive by the fact that it represents a 26% year-over-year increase: a strong indication that app modernization and cloud migration continue to be winning strategies for many enterprises. But which cloud strategy should a decision-maker choose? When should they migrate their legacy applications into a hybrid, multi-cloud, or on-premise architecture? There may not be single definitive answers to these questions, but there are certainly different options to weigh and considerations to make before officially adopting a new process. Read on to find out more about multi-cloud vs. hybrid cloud strategies for startups, and join the conversation with other cloud computing experts in the C2C Community. What is a Hybrid Cloud Strategy? A hybrid cloud strategy is an organizational method for businesses and enterprises that integrates public and private cloud services with on-premise cloud infrastructure to create a single, distributed computing environment. The cloud provides businesses with resources that would otherwise be too expensive to deploy and maintain in house. With on-premise infrastructure, the organization must have the real estate to house equipment, install it, and then hire staff to maintain it. As equipment ages, it must be replaced. This whole process can be extremely expensive, but the cloud gives administrators the ability to deploy the same resources at a fraction of the cost. Deploying cloud resources takes minutes, as opposed to the potential months required to build out new technology in house. In a hybrid cloud, administrators deploy infrastructure that works as an extension of their on-premise infrastructure, so it can be implemented in a way that ties into current authentication and authorization tools. What is a Multi-Cloud Strategy?
Conversely, a multi-cloud strategy is a cloud management strategy that requires enterprises to treat their cloud services as separate entities. A multi-cloud strategy will include more than one public cloud service and, unlike a hybrid cloud, does not need to include private services. Organizations use a multi-cloud strategy for several reasons, but the primary ones are to provide failover and avoid vendor lock-in. Should one cloud service fail, a secondary failover service can take over until the original service is restored. It’s an expensive solution, but it’s a strategy to reduce downtime during a catastrophic event. Most cloud providers have similar products, but administrators have preferences and might like one over another. By using multiple cloud services, an organization isn’t tied to only one product. Administrators can pick and choose from multiple services and implement those that work best for their organizations’ business needs. What is the Difference Between a Hybrid and Multi-Cloud Strategy? Though the differences might be slight, choosing the wrong cloud strategy can impact businesses in a big way, especially those just starting out. One of the primary differences between a hybrid and a multi-cloud strategy is that a hybrid cloud is managed as one singular entity while a multi-cloud infrastructure is not. This is largely because multi-cloud strategies often include more than one public service, each performing its own function. Additionally, when comparing multi-cloud vs. hybrid cloud, it’s important to note that a hybrid cloud will always include a private cloud infrastructure. A multi-cloud strategy can also include a private cloud service, but if the computing system is not managed as a single entity, it is technically considered both a multi-cloud and a hybrid cloud strategy. The infrastructures are designed differently, but the biggest difference is cost.
Hosting multi-cloud services costs more than using one service in a hybrid solution. Supporting a multi-cloud environment also requires more resources, because it’s difficult to create an environment where services from separate providers integrate smoothly with each other, and it requires additional training for any staff unfamiliar with cloud infrastructure. Which Cloud Strategy Has the Most Business Benefits? Every cloud strategy has its benefits, and most organizations leverage at least one provider to implement technology that would otherwise be too costly to host in-house. For a simple hybrid solution, use a cloud service that provides the majority of the resources needed. All cloud services scale, but you should find one that offers the technology you need to incorporate into your workflows. Multi-cloud is more difficult to manage, but it gives administrators greater freedom to pick and choose their favorite resources without relying on only one provider. A multi-cloud strategy also provides failover should a single provider fail, so it eliminates the single point of failure that most hybrid solutions experience. A cloud provider has minimal downtime, but downtime occasionally happens. With a multi-cloud strategy, administrators can keep most business workflows running normally until the primary provider recovers. It’s hard to stand squarely on the side of one cloud strategy over another. Every business has its own unique variables and dependencies that may make a hybrid model more desirable than multi-cloud, or vice versa. The benefits of an on-premise cloud infrastructure may also outweigh those of both hybrid and multi-cloud. The decision to go hybrid or adopt a multi-cloud strategy resides with the decision-makers of the enterprise. There are, however, some considerations businesses of any size and lifecycle can take into account before finalizing the decision.
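The failover benefit described above can be sketched in a few lines. This is a minimal illustration, not a production design, and the provider callables are hypothetical stand-ins for real cloud endpoints:

```python
from typing import Callable, Sequence


def call_with_failover(providers: Sequence[Callable[[], str]]) -> str:
    """Try each provider in priority order; return the first successful response.

    Any exception from a provider is treated as an outage, and the next
    provider in the list is tried. If every provider fails, the last
    error is re-raised.
    """
    last_error = None
    for provider in providers:
        try:
            return provider()
        except Exception as err:  # outage, timeout, quota error, etc.
            last_error = err
    raise last_error or RuntimeError("no providers configured")


# Simulated endpoints standing in for two cloud providers.
def primary() -> str:
    raise TimeoutError("primary region unreachable")


def secondary() -> str:
    return "served by secondary provider"


print(call_with_failover([primary, secondary]))
```

In a real deployment the callables would wrap provider-specific clients, and health checks would route traffic back once the primary recovers.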
What to Consider When Switching to a Hybrid Cloud Strategy Before choosing a provider, you should research each provider’s services, feedback, and cost. Choosing a provider isn’t easy, but the one integrated into the environment should have all the tools necessary to enhance workflows and add technology to the environment. A few key items that should be included are: authorization and authentication tools; speed and performance metrics; backups and failover within data centers; different data center zones for internal failover; logging and monitoring capabilities; usage reports; and convenient provisioning and configuration. Most cloud providers have a way to demo their services, or they give users a trial period to test products. Use this trial wisely so that administrators can determine the best solution for the corporate environment. Multi-Cloud vs. Hybrid Cloud for Startups Again, deciding between a multi-cloud strategy and a hybrid cloud strategy depends on the needs of the company. For startups, there may need to be a greater emphasis on security and disaster recovery, in which case a multi-cloud management strategy would provide a company at the beginning of its lifecycle the protection it needs to grow. Conversely, to bring up one of the key differences between a hybrid cloud and a multi-cloud strategy, if an entity uses private cloud services, a hybrid cloud model would provide the startup with the flexibility it needs to make changes to its computing infrastructure as it becomes more established. Do Startups Benefit From an On-Premise Cloud Infrastructure? The short answer is yes, startups can benefit from an on-premise cloud infrastructure. Taking any services in-house, whether it’s managing payroll or IT services, can help reduce costs and give businesses more visibility into their workflow.
If there is a need to hold on to an on-premise cloud infrastructure, a multi-cloud strategy will allow that enterprise to maintain that computing system while also managing additional public cloud services separately. What Does the Resurgence of IT Hardware Mean for Cloud? Even though cloud adoption has been surging for some time among businesses (Gartner reported in 2019 that more than a third of organizations view cloud investments as a “top 3 investing priority”), IT hardware and in-house services have also experienced a resurgence in popularity. Many believe this phenomenon, referred to as cloud repatriation by those in the IaaS (Infrastructure as a Service) industry, is the result of a lack of understanding around proper cloud management and containerization among IT decision-makers. They may initially choose to migrate certain applications into a hybrid cloud strategy only to abandon the effort because of workload portability problems. In light of this shift, hybrid and multi-cloud strategies still reign supreme as cost-effective and secure ways to manage legacy applications and workloads. It may take a fair amount of planning and strategizing to decide which cloud strategy matches the company lifecycle to which it applies, but cloud adoption certainly isn’t going anywhere any time soon.
For an ever-growing number of companies, cloud migration is quickly becoming a question of when, not if. The potential benefits of cloud migration are undeniable, from long-term cost-cutting to better performance across key metrics. However, cloud migration is not a simple process. Many strategies are available for companies opting to migrate to the cloud, and each comes with its ideal conditions, potential benefits, and risks. Cloud migration also involves different considerations depending on the prospective cloud environment. As a result, choosing to migrate to Kubernetes will entail unique implications for a company’s migrating resources. Read on for more information about the different cloud migration strategies and the key points to consider when preparing to migrate to Kubernetes. What is a Cloud Migration Strategy? A cloud migration strategy is a plan employed by an organization or a team to migrate applications and enterprise data from the original hosting system to the cloud. Cloud Migration Types Cloud migration can be broken down into six different types: Rehost, Re-Platform, Repurchase, Retain, Refactor, and Retire. The cloud migration type you choose depends heavily on several factors. For one, it should reflect the type of data you need to migrate. Additionally, teams need to consider the size of the organization, the workload level, and the current digital environment. Then, once that data and those applications have been successfully migrated onto the cloud, teams can create a Kubernetes migration strategy to manage cluster traffic effectively. Rehost Rehosting is one of the most specific cloud migration types, best suited for placing virtual machines and operating systems onto a cloud infrastructure without any changes. Rehosting refers to “lifting” from the hosting environment and “shifting” to a public cloud infrastructure.
When rehosted, resources can be recreated as IaaS analogs so that software will run on these resources on the new platform as before. Rehosting is a common initial step for organizations starting a new migration. This migration strategy is low-resistance and well suited to specific constraints or deadlines, or to a circumstance that requires completing the job quickly. It can be fast and efficient in these circumstances. However, migrated applications often require re-architecture once rehosted, and sometimes applications rehosted wholesale retain poor configuration. Re-Platform Re-platforming apps and data requires optimization of APIs and operating systems. For example, suppose teams are using multiple virtual machines to manage applications. In that case, a re-platforming migration strategy could allow them to switch to a platform with the ability to manage multiple workloads simultaneously, like Kubernetes. Re-platforming to Kubernetes involves breaking resources down into distinct functions and separating them into their own containers. Unique containers can be designed for each service. Containerizing these resources prepares them for optimal performance in the cloud environment, which can’t be achieved as part of the rehosting process. Repurchase Repurchasing is a migration strategy similar to rehosting and re-platforming, but more focused. Repurchasing involves optimizing individual components of applications that otherwise migrate as-is. For example, switching an application from self-hosted infrastructure to managed services, or from commercial software to open source, is common. Resources migrated this way should be tested, then deployed and monitored for crucial performance metrics. Retain After assessing the current state of your legacy systems and in-use applications, your team may determine that maintaining hybridity makes the most sense for modernizing application development and hosting.
In this case, teams can employ a retention cloud migration model or a hybrid model to optimize in-use applications that need improvement without disrupting the systems that are performing effectively. Refactor Refactoring rearchitects software during the migration process to optimize it for performance in the cloud environment once migrated. While re-platforming cuts down on the re-architecture that will need to occur after migration, refactoring integrates the re-architecture into the migration process as completely as possible. Refactoring requires significant resource investment on the front end. However, it yields a greater return on investment in the long run: the resources invested upfront end up cutting down on costs once the migration is complete and software is performing at optimal levels. Applications can also be modified continually to adjust to new requirements. Retire When certain applications are not being put to valuable use, or if they have been made redundant by applications that provide the same services, these applications can be retired before cloud migration. This can be a step in the migration process. Preparing for migration via one of the migration strategies described above may require retiring specific applications. Assessing available software before migration and determining which applications can and can’t be retired is beneficial when possible and convenient. Managing Applications With Kubernetes Creating these multi-cloud and hybrid-cloud environments requires modernizing application management and adopting DevOps. Many organizations and teams with a cloud strategy manage their applications, workloads, deployments, and data with open-source tools like Docker and Kubernetes. And while choosing Docker vs. Kubernetes is entirely dependent on the preference of the DevOps team, Kubernetes offers a level of scalability and flexibility that makes it one of the most popular container orchestration tools on the market.
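The six migration types above can be sketched as a simple rule-based triage. The attribute names and rule ordering here are illustrative assumptions, not a formal assessment method; a real assessment weighs many more factors:

```python
def recommend_strategy(app: dict) -> str:
    """Map an application's attributes to one of the six migration types.

    Rules are checked in rough order of elimination: retire what isn't
    used, retain what can't move, then pick the lightest migration path
    that fits.
    """
    if app.get("redundant") or not app.get("in_use", True):
        return "Retire"
    if app.get("regulatory_lock") or app.get("tight_legacy_coupling"):
        return "Retain"
    if app.get("commercial_replacement_available"):
        return "Repurchase"
    if app.get("needs_rearchitecture"):
        return "Refactor"
    if app.get("containerizable"):
        return "Re-Platform"
    return "Rehost"  # default: lift and shift unchanged


print(recommend_strategy({"in_use": False}))               # Retire
print(recommend_strategy({"containerizable": True}))       # Re-Platform
print(recommend_strategy({"needs_rearchitecture": True}))  # Refactor
```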
However, that’s not to say that issues managing cluster traffic and migration don’t occasionally occur, in which case creating a Kubernetes migration strategy can help. Creating a Kubernetes Migration Strategy to Avoid Downtimes Creating a Kubernetes migration strategy is similar to choosing a cloud migration strategy: the key to avoiding downtime when migrating applications is to act gradually and with awareness. However, moving applications within a cloud-native architecture is not as simple as rehosting applications. There are a few key considerations to make to craft an effective Kubernetes migration strategy. Determine the Goal of the Migration To determine cloud migration goals, identify specific business drivers and assess applications for migration based on priority. Cloud migration can yield all kinds of benefits, but some common goals of migration include increased speed and scalability of operations, better resources, lower costs, and improved customer service. In addition, when migrating to Kubernetes, it’s essential to determine what should be modified: the application or the new environment. Thus, assess the application for possible modifications, how it would benefit from Kubernetes, and the effort involved. Gather Information About Legacy Applications When migrating applications, it’s essential to take inventory of the filesystems and network compatibility of existing applications. Any system migrating to a new cloud environment will host legacy applications of different values. Some of these applications will be worth retaining for the historical significance of their information, while others will need to be retired. Many applications can be modernized to perform more dynamically on the cloud and bring unique benefits to the cloud environment. Migrating these applications to the cloud can increase their speed and scalability and improve their intelligence and analytics.
Individual legacy applications will likely need to be modernized differently, so each should be assessed to determine which cloud migration strategy will suit it best. Determine the Value of the Migration It’s possible that after assessing the goal of the Kubernetes migration strategy and the compatibility of in-use applications, you determine that migration is not worth the effort. Coming to this conclusion requires a deep understanding of legacy applications and unpacking the data at hand. In addition, any cloud migration will involve some costs, so calculating these costs to determine the potential value of the migration is crucial. When preparing for migration, determine the cost of the migrated resources and evaluate which expenses can be eliminated after migrating to the cloud. Before migrating, however, determine the potential value of legacy applications and what can be modernized or retired. An architecture like Kubernetes may not support some of these legacy applications, so knowing this beforehand will help minimize costs and maximize potential value down the line. Kubernetes is a powerful tool for modernizing applications and adopting cloud-native architecture to help some processes run more smoothly. Still, it’s essential to first determine the feasibility of migration and whether or not the outcome is worth the effort. Many organizations have updated their systems with platforms like Kubernetes, but we’re interested to hear what you have to say! Join more discussions just like this with the C2C Community.
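The cost calculus described in this section can be made concrete with a simple break-even estimate: compare one-time migration spend against monthly savings. All figures below are hypothetical, for illustration only:

```python
import math


def migration_breakeven_months(migration_cost, current_monthly_cost, cloud_monthly_cost):
    """Months until the one-time migration spend is recouped by monthly savings.

    Returns None when the cloud environment costs more per month than the
    current one, i.e. there is no payback on cost grounds alone.
    """
    monthly_savings = current_monthly_cost - cloud_monthly_cost
    if monthly_savings <= 0:
        return None
    return math.ceil(migration_cost / monthly_savings)


# Hypothetical figures: $120k migration project, hosting drops
# from $25k to $17k per month after the move.
print(migration_breakeven_months(120_000, 25_000, 17_000))  # 15
```

A break-even horizon like this is only one input; the non-cost benefits discussed above (scalability, agility, retiring legacy systems) also belong in the value assessment.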
Most customers I talk to today are excited about the opportunities modernizing their workloads in the cloud affords them. In particular, they are very interested in how they can leverage Kubernetes to speed up application deployment while increasing security. Additionally, they are happy to turn over some cluster management responsibilities to Google Cloud’s SREs so they can focus on solving core business challenges. However, moving VM-based applications to containers can present its own unique set of challenges: assessing which applications are best suited for migration; figuring out what is actually running inside the virtual machine; setting up ingress and egress for migrated applications; reconfiguring service discovery; and adapting day 2 processes for patching and upgrading applications. While those challenges may seem daunting, Google Cloud has a tool that can help you easily solve them in a few clicks. Migrate for Anthos helps automate the process of moving your applications, whether they are Linux or Windows, from various virtual machine environments to containers. There is even a specialized capability to migrate WebSphere applications. Your source VMs can be running in GCP, AWS, Azure, or VMware. Once the workload has been containerized, it can then be easily deployed to Kubernetes running in either a GKE or an Anthos cluster on GCP, AWS, or VMware. Let’s walk through the migration process together, and I will show you how Migrate for Anthos can help you easily and efficiently migrate virtual machines to containers. The first step in any application migration journey is to determine which applications are suitable for migration. I always recommend picking a few low-risk applications with a high probability of success. This allows your team to build knowledge and process while simultaneously establishing credibility.
Migrate for Anthos has an application assessment component that will inspect the applications running inside your VM and provide guidance on the likelihood of success. There are different tools for Windows and Linux, and for WebSphere applications we leverage tooling directly from IBM. After you’ve chosen a good migration candidate, the next step is to perform the actual migration. Migrate for Anthos breaks this down into a couple of discrete steps. First, Migrate for Anthos will do a dry run where it inspects the virtual machine and determines what is actually running inside it. The artifact from this step is a migration plan in the form of a YAML file. Next, review the YAML file and adjust any settings you want to change. For instance, if you were migrating a database, you would want to update the YAML file with the point in the file system to mount the persistent volume that holds the database’s data. After you’ve reviewed and adjusted the migration YAML, you can perform the actual migration. This process will create a few key artifacts. The first is a Docker container image. The second is the matching Dockerfile, along with a Kubernetes deployment YAML that includes definitions for all the relevant primitives (services, pods, stateful sets, etc.). The Docker image that is created is actually built using a multi-stage build leveraging two different images: the first is the Migrate for Anthos runtime, and the second includes the workload extracted from the source VM. This is important to understand as you plan day 2 operations. The Dockerfile can be edited to update not only the underlying Migrate for Anthos runtime layer, but also the application components. And while it’s not mandatory, you can easily manage all of that through a CI/CD pipeline. If you want to ease complexity and accelerate your cloud migration journey, I highly recommend you check out Migrate for Anthos.
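The dry-run-then-edit workflow described above can be illustrated with a small sketch. Note that the plan structure and field names below (`data_volumes`, `mount_path`) are hypothetical stand-ins for this illustration, not the actual Migrate for Anthos plan schema; always consult the generated YAML for the real keys:

```python
import copy

# Illustrative stand-in for the migration plan that the dry run emits
# as YAML; here it is modeled as a plain dict for simplicity.
dry_run_plan = {
    "app_name": "inventory-db",
    "data_volumes": [],
}


def add_persistent_volume(plan, mount_path, size_gb):
    """Return a copy of the plan with a persistent volume mounted at mount_path.

    The original plan is left untouched, mirroring the review-and-edit
    step: keep the dry-run artifact, adjust a copy, then migrate.
    """
    updated = copy.deepcopy(plan)
    updated["data_volumes"].append({"mount_path": mount_path, "size_gb": size_gb})
    return updated


# A database workload needs its data directory on a persistent volume.
plan = add_persistent_volume(dry_run_plan, "/var/lib/mysql", 100)
print(plan["data_volumes"])
```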
Watch the videos linked above, and then get your hands on the keyboard and try out our Qwiklab.
Dan Stuart, SVP of IT Services at Southwire Company, joined C2C on the virtual stage. Stuart shared how the 71-year-old manufacturing company set the bar in its industry by moving its mission-critical SAP ECC workload environment to Google Cloud. Key discussion points: What business problem were you trying to solve by moving to the cloud, and what aspects of cloud infrastructure are most important to you? How did you determine the right way to approach these challenges, and why was Google Cloud the solution? With the advent of cloud and Southwire’s move, how do you equip your team with cloud-related tools and skills? In what ways do you see Southwire Company taking advantage of other Google Cloud offerings in data, analytics, AI, ML, or industry-specific solutions for manufacturing? Having completed the project in July 2020 and now about a year post-migration, what has been the biggest payoff? Watch the entire conversation here: Want to learn more? Join us on May 26 for a technical overview with the Southwire team.
Known as a prominent programmer and entrepreneur in the tech space, Andi Gutmans today serves as the General Manager and VP of engineering for databases at Google Cloud. He is responsible for overseeing a group whose goal is to support customers with their data journeys and with transforming their businesses. “It’s a three-step journey,” he said. “We take them through migration, modernization, and then transformation. The best part of what we do is being able to innovate on behalf of our customers.” Innovating is something Gutmans does well. He co-created PHP, the most widely used programming language for creating dynamic web pages, and he also co-founded Zend Technologies, which continues to do much of the work in further developing PHP. Gutmans doesn’t shy away from new challenges; he instead thrives on finding solutions for them. “All customers want to eventually get to transformation,” he said. “But it’s not always easy to make the full leap in one step. I’m excited about the opportunity to partner with them on that journey and to really enable that transformation.” Watch the whole interview below.
In 1999, Urs Hölzle joined Google as one of its first 10 employees and its first vice president of engineering. Twenty-one years later, he serves as the senior vice president for technical infrastructure and oversees the design, installation, and operation of the servers, networks, and data centers that power Google’s services. In sum, he is the person in charge of making all of Google’s wares available to developers around the world via Google Cloud. Watch the whole interview below.
Author’s Note: C2C Talks are an opportunity for C2C members to engage through shared experiences and lessons learned. Often there is a short presentation followed by an open discussion to determine best practices and key takeaways. Juan Carlos Escalante (JC) is a pioneering member of C2C and a vital part of the CTO office at Ipsos. Escalante details how he and his team handled data migration powered by Google Cloud and shares his current challenges, which may not be unlike those you’re facing. As a global leader in market research, Ipsos has offices in 90 countries and conducts research in more than 150 countries. So, to say its data architecture is challenging barely covers the complexity JC manages each day. “Our data architecture and our data pipeline challenges get complex very quickly, especially for workloads dealing with multiple data sources, and what I describe as hyper-fragmented data delivery requirements,” he said in a recent C2C Talks: Data Migration and Modernization on December 10, 2020. So, how do they manage a seamless data flow? And how does JC’s data infrastructure landscape look? Hear below. What was the primary challenge? Even though the design JC described is popular and widely used in the space, it isn’t without its own set of challenges, and siloed data infrastructure rises to the top. “The resilience of siloed data infrastructure platforms that we see scattered across the company translates to longer cycle times and more friction to pivot and react to changing business requirements,” he said. Hear JC explain the full challenge below. What resonates with you? Share it with us! How did you use Google Cloud as a solution? By leveraging Google Cloud, JC and his team have unlocked new opportunities to simplify how different groups come into a data infrastructure platform and serve or solve their specific needs. “We all have different products and services that we have available within Google Cloud Platform,” he said.
“Very quickly, we’ve been able to test and deploy proofs of concept that have moved rapidly towards production.” Some examples of the benefits JC and his team have found by using the Google Cloud product BigQuery include: reduced cycle time, with processing time dropping from 48 hours to seven minutes, and data harmony across teams. Hear JC explain how BigQuery helped reach these successful milestones. Since it’s going so well, what’s next? The goal is to think bigger and determine how JC and his team can transform their end-to-end data platform architecture. “The next step we want to take in our data architecture journey is to bring design patterns that are common and are used widely in software development and bring those patterns into our data engineering practices,” he said. On that list is version control for data pipelines; hear JC explain why. Also, JC is working with his team to plan for the future of data architecture and analytics on a global scale, which he says will be a multi-cloud environment. Hear him explain why below. Questions from the C2C Community 1. Are the business analysts running their daily job through the BigQuery interface? Or do they use a different application that’s pulling from BigQuery? For JC’s organization, some teams got up to speed very quickly, while others need a little more coaching, so they’ll be putting together some custom development with Tableau. Hear JC’s full answer below. Hear how they use Google Sheets to manage the data exported from BigQuery. 2. I have the feeling that my databases are way simpler than yours, because my database is not dealing with those things. It’s just a handful of tables, so it’s easier for us to monitor a handful of tables. But how do you monitor triggers? This question led to a more in-depth discussion, so JC offered to set up a time to discuss further separately, which is just one of the beautiful benefits of being a part of the C2C community.
Check out what JC said to attack the question with some clarity below. We’ll update you with their progress as it becomes available! 3. What data visualization tools do JC and his team use? “Basically, the answer is we’re using everything under the sun. We do have some Spotfire footprint, we have Tableau, we have Looker, and we have Qlik Sense. We also have custom-developed visualizations,” he said. “My team is gravitating more towards Tableau, but we have to be mindful that whatever data architecture design we come up with, it has to be decoupled, flexible, and it has to be data-engine and data-visualization agnostic, because we do get requests to support the next visualization,” he warned. Hear about how JC manages the overlap between Looker and Tableau and why he likes them both. Extra Credit JC and his team used the two articles from Thoughtworks linked below to inform their decision-making and as a guide for modernizing their data architecture. He recommends checking them out. How to Move Beyond a Monolithic Data Lake to a Distributed Data Mesh by Zhamak Dehghani, Thoughtworks, May 2019 Data Mesh Principles and Logical Architecture by Zhamak Dehghani, Thoughtworks, December 2020 We want to hear from you! There is so much more to discuss, so connect with us and share your Google Cloud story. You might get featured in the next installment! Get in touch with Content Manager Sabina Bhasin at email@example.com if you’re interested. Rather chat with your peers? Join our C2C Connect chat rooms! Reach out to Director of Community Danny Pancratz at firstname.lastname@example.org.
This article was originally published on November 20, 2020. Hailed as one of the “Founding Fathers” of the internet for co-creating PHP, Andi Gutmans is just getting started. To discuss his new role at Google and the future of data, Gutmans joins C2C for a discussion in the sixth installment of our thought leadership series, where we don’t hold back on either the fun or the challenging questions. As a four-citizenship-holding engineering powerhouse, Gutmans brings a global perspective to both tech and coffee creation. “I love making espresso and improving my latte art,” he mused. “I always say, if tech doesn’t work out for me, that’s where you’re going to find me.” But when he isn’t daydreaming about giving it all up to own a coffee shop and become a barista, he leads the operational database group as the GM and VP of engineering and databases at Google. “Our goal is building a strategy and vision that is very closely aligned with what our customers need,” he said. “Then, my organization works with customers to define what that road map looks like, deliver that, and then operate the most scalable, reliable, and secure service in the cloud.” It’s an enormous responsibility, but Gutmans and his team met the challenge with a three-step approach: migration, modernization, and transformation. They accomplished this even though they’ve never met in person; Gutmans started working at Google during the COVID-19 pandemic. Driven to support customers through their data journeys as they move to the cloud and transform their businesses, he digs into the how, the why, and more during the conversation in the video above, but these are the five points you should know: Lift, Shift, Transform The pandemic has changed the way everyone is doing business. For some, the change comes with accelerating the shift to the cloud, but Gutmans said most customers are taking a three-step journey into the cloud. “We’re seeing customers embrace this journey into the cloud,” he said.
"They’re taking a three-step journey into the cloud: migration, which is trying to lift and shift as quickly as possible, getting out of their data center; then modernizing their workloads, taking more advantage of some of the cloud capabilities; and then completely transforming their business.”

Migrating to the cloud allows customers to spend less time managing infrastructure and more time innovating on business problems. To keep the journey frictionless for customers, he and his team are working on Cloud SQL, a fully managed service for MySQL, PostgreSQL, and SQL Server. They also handle any regulatory requirements customers have in various geographies.

“By handling the heavy lifting for customers, we give them more bandwidth for innovation,” he said. “So the focus for us is making sure we’re building the most reliable service, the most secure service, and the most scalable service.”

Gutmans described how Autotrader lifted and shifted into Cloud SQL and was able to increase deployment velocity by 140% year over year. “So, there is an instant gratification aspect of moving into the cloud.”

Other benefits of the cloud include auto-remediation, backups, and restoration. Still, the challenges are determining what stays at the edge and what goes into the cloud, and, of course, security. Gutmans said he wants to work with customers to better understand their pain points and thought processes.

Modernizing sometimes requires moving customers off proprietary vendors and onto open-source-based databases, but Gutmans’ team has a plan for that. By investing in partners, they can provide customers with assessments of their databases, more flexibility, and cost reductions.

Finally, when it comes to transformation, the pandemic has redefined the scope.
A virtual-focused world is reshaping how customers do business, so that’s where a lot of Google’s cloud-native database investments come in, such as Cloud Spanner, BigQuery, and Firestore.

“It's really exciting to see our customers make that journey,” he said. “Those kinds of transformative examples where we innovate, making scalability seamless, making systems that are reliable, making them globally accessible, we get to help customers, you know, build for [their] future. And seeing those events be completely uneventful from an operational perspective is probably the most gratifying piece of innovating.”

Gutmans added that transformation isn’t limited to customers with legacy data systems. Cloud-native companies may also need to re-architect, and Google can support those transformations, too.

AI Is Maturing

Gartner has predicted that by 2022, 75% of all databases will be in the cloud, and that isn’t just because of the pandemic accelerating transformation. AI is also maturing, and it is allowing companies to make intelligent, data-driven decisions.

“It has always been an exciting space, but I think today is more exciting than ever,” Gutmans said. “In every industry right now, we’re seeing leaders emerge that have taken a digital-first approach, so it’s caused the rest of the industries to rethink their businesses.”

Data Is Only Trustworthy If It’s Secure

Data is quickly becoming the most valuable asset organizations have. It can help them make better business decisions and better understand their customers and what’s happening in their supply chains. Analyzing data and leveraging historical records can also improve forecasting to better target specific audiences.

But with all the tools improving data accessibility and portability, security is always a huge concern.
But Gutmans’ team is dedicated to keeping security at the fore.

“We put a lot of emphasis on security; we make sure our customers’ data is always encrypted by default,” he said.

Not only is the data encrypted; tools are also available that make it easy for customers to access and move it.

“We want to make sure that not only can the data come up, [but] we also want to make it easy for customers to take the data wherever they need it,” Gutmans said.

Even with the support of the tools Gutmans’ team provides, the customer remains central and retains all the control.

“We do everything we can to ensure that customers can govern their data in the best possible way; we also make sure to give customers tight control,” he said.

As security measures improve, new data applications are emerging, including fraud detection and the convergence of operational and analytical systems. This intersection creates powerful marketing applications, leading to improved customer experiences.

“There are a lot of ways you can use data to create new capabilities in your business that can help drive opportunity and reduce risk,” Gutmans said.

Leverage APIs Without Adding Complexity

There are two kinds of APIs, as Gutmans sees it: administration APIs and APIs for building applications.

On the provisioning side, Gutmans suggests embracing the DevOps culture and following the trend of managing infrastructure as code to automate test, staging, and production environments.
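As a minimal sketch of that infrastructure-as-code idea, a database instance can be declared once as data and reused across environments. The helper name and default settings below are illustrative assumptions, not an official Google API; the dictionary shape loosely follows the fields the Cloud SQL Admin API expects for an instance.

```python
# Hypothetical sketch: declaring a Cloud SQL instance as code.
# The helper and its defaults are illustrative, not an official client library.

def cloud_sql_instance(name, db_version="POSTGRES_15",
                       tier="db-custom-2-8192", region="us-central1"):
    """Build a declarative definition for one database instance."""
    return {
        "name": name,
        "databaseVersion": db_version,
        "region": region,
        "settings": {
            "tier": tier,
            "backupConfiguration": {"enabled": True},  # automated backups on
            "ipConfiguration": {"requireSsl": True},   # encrypt in transit
        },
    }

# One definition per environment keeps test, staging, and production in sync.
envs = {env: cloud_sql_instance(f"app-db-{env}")
        for env in ("test", "staging", "prod")}
```

Because the environments are generated from a single function, a settings change (say, enabling backups or requiring SSL) propagates everywhere instead of drifting per environment, which is the agility benefit Gutmans describes.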
He points to resources available here and here for background on how to do this. When it comes to application APIs, though, his answer is more concise: “If the API doesn’t reduce complexity, then don’t use it.”

“I don’t subscribe to the philosophy where, like, everything has to be an API, and if not...you’re making a mistake,” he added.

He recommends focusing on where you can gain the most significant agility benefit to help your business get the job done.

Final Words of Wisdom

Gutmans paused, returned to the importance of teamwork and collaboration, and offered this piece of advice:

“Don’t treat people the way you want to be treated; treat people the way they want to be treated.”

He also added that the journey is different for each customer. Just remember to “get your data strategy right.”