Learn | C2C Community

2Gather Chicago: Google Cloud Next ‘23 in Review

On October 10, hot on the heels of Google Cloud Next ‘23, customers, partners, and Googlers were invited to Google’s office in Chicago to recap critical takeaways from the event. This gathering allowed for a nuanced examination of the emerging technologies that had been discussed and gave partners and customers further ideas and implementations to build on.

A Special Focus

Compared to Google Cloud Next ‘23, 2Gather was a smaller event that created a conversational atmosphere. While the event included the panels and fireside chats that attendees have come to expect, the more intimate setting encouraged more interaction and questions about some of the biggest news to come out of Next. This was especially true for those who were newer to events like Next.

"This was a great opportunity to meet customers and get a recap of the San Francisco event I previously attended," said Kyle Wheat (@Kwheat17), a software engineer for CDW Amplified Education. "This was a great event as well because it felt like it had a more specific focus, and it was a little less overwhelming.”

Google Panel

After a brief welcome, the event started with a Google panel moderated by Geoff Kramer, Head of Customer Engineering. Panel participants included:

Eric Strader, Principal Architect, Google
Fiona Egube-Waters, Cloud Customer Engineer, Google
Rick Westrate, Principal Architect, Google
Rikin Asher, Cloud Architect, Google

The panelists discussed the takeaways and announcements they found most important coming out of Next ‘23, primarily the role that AI will play in organizations in the coming years and how AI can be used to take advantage of an organization's data. Panel members explained that the bar of entry to generative AI is very low today. Because of this, there are still some “off-the-wall” solutions that organizations can find in its use, thereby increasing productivity across the board.
For example, instead of writing endless painstaking lines of code, developers can point Google's LLM (PaLM 2) at their databases. While AI is still being adopted gradually by organizations across all spectrums, Google's panel made one thing clear: it's here to stay. Every day AI makes data management easier (BigQuery and Salesforce even work together seamlessly now). In the next three to five years alone, AI is projected to help organizations increase their productivity by 20%.

Fireside Chat - Lytics

After the panel, Google Customer Engineering Manager Todd Sheetz (@tsheetz) sat down with Jascha Kaykas-Wolff (@kaykas), President of Lytics, to further discuss the relationship between AI and data management. Jascha stressed that his real takeaway from Next ‘23 was the importance of thinking about how we structure and organize data, and the critical importance of security when experimenting with AI.

"In an age where predictive and generative AI technologies are transforming the very fabric of enterprise operations, it's imperative to underscore the significance of data security," Jascha said after the event. "As companies deploy these intelligent algorithms to forecast market trends, optimize supply chains, create content, or even design new products, the data feeding into these systems is both voluminous and sensitive.”

Jascha explained how business intelligence, proprietary algorithms, and customer data all represent invaluable assets to an organization. "But it's a double-edged sword," he continued. “The very data that empowers us can be exploited if not rigorously safeguarded. So, when you're experimenting with predictive or generative AI, we must take a multi-layered approach to security."

In today’s world, though, the security required to safeguard these assets goes far beyond encryption practices.
According to Jascha, "it involves real-time monitoring, stringent access controls, regular security audits, and confidence that the hyperscaler you are working with can uphold your privacy posture. Failure to do so not only risks the integrity of the AI models but also jeopardizes the trust and compliance we have built with our stakeholders.”

Customer Panel - Q&A

To round out the event, Geoff Kramer took the stage again to moderate a customer panel focused on the customer journey. The panel included:

Andy Goodfellow, CTO, Zoro
Gaganpreet Randhawa, Assistant VP, Enterprise Architecture, CNA Insurance
Joel Vasallo (@jvasallo), Senior Director, TAG - The Aspen Group

This panel was an excellent resource for customers who are just learning about Google Cloud and want to know more about the capabilities they could use in their own organizations. More a community conversation than an informative panel, it gave attendees insight into the wins and challenges industries are facing when adopting AI for data management. There were two things all panel members agreed on. First, any organization's data structure and management could always be better (especially when using AI). Second, it’s imperative for organizations starting out (or restarting) their data management journeys to modernize and understand what they're building toward. Organizations should have a deep understanding of how data will be accessed, how it can be segmented, and which data is most important.

Fostering a Community

The conversations kept rolling after the customer panel during a happy hour, where customers and partners networked and discussed the ideas that were raised and expanded on during the panels. Whether during the Q&A or during the networking opportunities before and after the event, everyone got a chance to discuss their challenges, ideas, and solutions.
With a focus on cultivating a community for both partners and customers, it’s crucial to ensure no one leaves an event with more questions than answers. When the audience can ask questions and network with panelists and other partners, it encourages idea generation around broader topics such as AI and BigQuery, two of the biggest topics of the night at this 2Gather event. "The main benefit to me is not only the opportunity to ask questions and engage with the panels,” Kyle explained, “there were also a lot of topics covered that we can go back and use in our own organization, like the BigQuery implementations.”
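The panel's point about pointing an LLM like PaLM 2 at a database, instead of hand-writing every query, can be sketched in a few lines. This is a minimal illustrative sketch: the function, table name, and schema below are invented for the example and are not a Google API.

```python
# Hypothetical sketch: assembling a text-to-SQL prompt for an LLM
# (e.g. PaLM 2) so it can answer questions against a BigQuery table.

def build_sql_prompt(question: str, table: str, schema: dict[str, str]) -> str:
    """Turn a table schema and a natural-language question into an LLM prompt."""
    columns = "\n".join(f"  {name} {dtype}" for name, dtype in schema.items())
    return (
        "You are a BigQuery SQL assistant.\n"
        f"Table `{table}` has the schema:\n{columns}\n"
        f"Write one Standard SQL query that answers: {question}\n"
        "Return only the SQL."
    )

prompt = build_sql_prompt(
    "How many orders shipped last month?",
    "shop.orders",
    {"order_id": "STRING", "shipped_at": "TIMESTAMP"},
)

# The prompt would then go to the model, e.g. with the Vertex AI Python SDK:
#   from vertexai.language_models import TextGenerationModel
#   model = TextGenerationModel.from_pretrained("text-bison")
#   sql = model.predict(prompt).text
```

The value is in the pattern, not this particular helper: the schema gives the model the context it needs, so the developer reviews a generated query rather than writing it from scratch.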

Categories: AI and Machine Learning, Data Analytics, Sustainability

CarCast with Bruno Aziza: How To Create and Destroy Value with Generative AI

Hello C2C! When it comes to Generative AI, Data really is your MOAT! This week, we cover the latest in Generative AI research from the Boston Consulting Group (you might have read Matt Marshall's latest perspective on it here). I also bring on an "expert guest" to help us determine why Chief Data Officers are set up to fail. Let's dive right in!

How To Create and Destroy Value with Generative AI: According to Boston Consulting Group’s latest research, 90% of participants improved their performance when using generative AI for creative product innovation, and in fact converged on a level of performance 40% higher than that of those working on the same task without Gen AI. HOWEVER, when participants used the technology for business problem solving, they performed 23% worse than those doing the task without GPT-4. Even participants who were warned about the possibility of wrong answers from the tool did not challenge its output. Bottom line: Gen AI is a powerful LEVELER of performance, BUT it seems that people mistrust the technology in areas where it can contribute massive value and trust it too much in areas where it isn’t competent.

How To Prioritize Generative AI Use Cases: Drawing from examples of great organizations (Wendy’s, Mayo Family Foundation, Walmart, Wayfair, Bloomberg, etc.) and research from BCG, McKinsey, and more, I unveil my “MT-CAC” acronym for selecting the right use cases for enterprise Gen AI applications. MT-CAC stands for Multi-Modal, Trusted, Current, Applied, Contextual. In this LinkedIn Live, we also discuss why Data Quality is in fact your MOAT and how Generative AI execution is stuck between FOMO and FOMU right now.

Are Data Leaders Set Up To Fail?! A meager 20.6% of executives reported that a data culture had been established within their companies, down from the 28.3% of companies reporting an established data culture back in 2019. It doesn't seem we're making progress. What's really happening?
My "special guest" explains! The CarCast also includes EXTRAS like “The latest Cybersecurity MAP,” “The future of Generative AI in 15 charts,” and a Marc Randolphism (Netflix co-founder) on what defines a company.

What's your opinion on this week's topics? Leave your comment below and let's continue the conversation. Have a great week!

--Analytically yours,
Bruno

Categories: AI and Machine Learning

C2C and Fulfilld Return to LinkedIn for a Live Stream with Uber

On October 9, 2023, C2C and Fulfilld are returning to LinkedIn for another live-streaming conversation about transforming digital businesses on Google Cloud. For this conversation, Fulfilld CTO and Co-Founder Michael Pytel (@mpytel) will be joined by Uber Engineer Ajay Thakur.

Fulfilld is a Google digital-native warehouse management orchestration platform incorporating Google Vertex AI. As a major disruptor in the warehouse management software space, Fulfilld is in contact with a range of enterprises that want to streamline their warehousing operations using tools like Fulfilld’s AI-powered spatial analysis functionality, custom digital assistant, and digital twin. With Ajay Thakur’s help, Uber, the leading ridesharing and transportation network company, has recently integrated Google Cloud across its AI operations.

Join us on October 9, 2023 at 4:00 p.m. EST on LinkedIn to share a special conversation between Michael and Ajay. No registration or unique link is required.

C2C exists to bring companies of all sizes and across all industries together. That’s why we’re pleased to be partnering with Fulfilld, a major Google Cloud startup success story, for this series of conversations with executive and C-level representatives of enterprise Google Cloud customers. Join this event to experience this unique benefit of the Google Cloud customer community with us!

Extra Credit:

Categories: AI and Machine Learning, C2C News

Turning Imagination into Reality: Catching Up with Trevor Marshall at Cloud Adoption Summit New York City

Trevor Marshall (@tmarshall), CTO of Current, first joined C2C in person at a 2Gather event in New York City as a guest of our partners at DoiT International. At that event, Trevor coined the unofficial Google Cloud marketing slogan “come for the credits, stay for the kubernetes,” and took home some ideas about C2C partner AMD’s compute instances. At the Cloud Adoption Summit in New York on September 12, 2023, Trevor returned for a customer panel and a fireside chat with DoiT Field Sales Manager Meryl Hayes (@merylhayes).

After the event, we caught up with Trevor to see what he thought of AMD’s offerings, get his thoughts on the hysteria over generative AI, and look back on his past as a concert musician. Read on below for a short interview with Trevor, a full recording of his conversation with Meryl, and a fireside chat on security and compliance with Wayne White and Lawrence Chin (@LawrenceChin) of C2C partner Palo Alto Networks.

At 2Gather: New York City, you spoke onstage with Michael Brzezinski from AMD. When we spoke afterward, you mentioned you had been considering the AMD-powered C2D compute instances for a proof of concept you were working on. Did you work with AMD on that at all?

We tried spinning up some workloads. We have a very C2-heavy stack. Most of what we do is throughput-based. We’re not keeping state in a lot of processes, so the compute-optimized chips that we’re using are the best thing for what we’re looking for, and that’s most likely going to be the case going forward. Because it’s the C2 family, we get the benefit of any underlying physical substrate coming through, and the benefit of those improvements. We’ll probably stick with that family over the next couple of years in case something else emerges.

Did you see or hear anything at this event that applied to your work in a similar way?

We’ve been PCI compliant for a couple of years.
We’re getting our SOC 2 attestation this year, so standardized control frameworks regarding regulatory oversight and technical oversight have been top of mind. There was a nice presentation from Palo Alto Networks regarding some of what’s on the horizon when it comes to technical regulation, so it was good to see that. We’ll probably look at some of what they call harmonized frameworks, because there are all sorts of ways of looking at basically the same technical control, and we’ve taken that approach in the past, so we’ll probably just revisit it, especially now that we’re attesting to multiple certifications. That was cool to see and get more information about.

At 2Gather: NYC, you mentioned the tendency of some coders to get carried away experimenting with a new “sweet tool.” Do you have any thoughts about how to manage that tendency when it comes to generative AI?

I think that there’s quite a lot of moral hazard emerging. It’s so easy now to get auto-generated code through Copilot or other products that are emerging. No doubt, if you are an efficient developer, you can leverage that type of technology to become more efficient, produce more code, and things like that, but it’s going to lead pretty quickly to an abstraction and a disconnect away from the actual business logic itself, where something goes wrong in production, you don’t really know what’s happening, and you’re probably going to produce less efficient code.

Now, maybe some of these autosuggestions get so much better than the human that eventually they take care of it, but this is the nature of our systems: when we think about ourselves as engineers, we don’t really hold ourselves to the same high standards that a mechanical engineer would, where the tolerance for failure is zero percent. Software engineering has always had this built-in attitude of, “hey,
some things are going to go wrong, but we’ll have incident response and we’ll make sure that we’re operating as correctly as possible.” Especially for a company like ours, in financial services, reliability is super important. As these gen-AI code production tools come into production, and that “sweet-tooling” of, “oh, look at this sweet intelligent plug-in that now writes half my code,” I’d just keep an eye over the next couple of years on some postmortems that emerge from code that was not written by a human.

I read that you used to be a musician. What was your instrument?

I played orchestral percussion and I wrote music.

Did anything from your music background carry over to what you do in technology?

Two big things came from my music background. The first was discipline. I was very hardcore, conservative. I was at the Juilliard precollege. Every Saturday from 8 a.m. to 8 p.m., I’m doing music, and then every other day I’m practicing 5 hours a day. I wasn’t big into sports. I wasn’t that big into education at the time. What mattered was knowing what it took to be exceptional in a true meritocracy; until very recently, classical music was still pretty much a complete meritocracy. There are some political navigations around that, but for the most part, you can’t fake being a great musician, because everyone ends up finding out in some way or another. Even if you look at the hyper-produced artist, what they have is beyond music; it's image. There’s an artistic integrity generally that’s really hard to fake, so that discipline was really important.

The second thing was the creativity of writing music. Actually, a lot of the abstractions that would come up in writing music with metric modulation immediately had applications in algebra when I started studying math in college, which is the direction I took once I went to undergrad. It tickles the same part of the brain.
So I think those were the two big things: the ability to imagine something and turn it into reality through creativity––you have to completely understand what you’re trying to produce when you’re writing music––and then the discipline of actually being able to produce it and get it to the place you’re thinking of.

DoiT and Current at Cloud Adoption Summit New York City:

Palo Alto Networks at Cloud Adoption Summit New York City:

Extra Credit:

Categories: AI and Machine Learning, Compute, Identity and Security

Introducing C2C Live on LinkedIn with Fulfilld and Cintas

Greetings C2C members! Today we’re excited to let you all know about a new content series C2C will be piloting later this summer: a live interview series hosted on our LinkedIn account, featuring interviews with enterprise Google Cloud customers, hosted by Fulfilld CTO and Co-Founder Michael Pytel (@mpytel).

Fulfilld is a Google digital-native warehouse management orchestration platform incorporating Google Vertex AI. As a major disruptor in the warehouse management software space, Fulfilld is in contact with a range of enterprises that want to streamline their warehousing operations using tools like Fulfilld’s AI-powered spatial analysis functionality, custom digital assistant, and digital twin. How this all happens, and how Google Cloud makes it all possible, is more than worth breaking down with an in-depth discussion.

Join us on August 23, 2023 at 1:00 p.m. EST on LinkedIn to share a special conversation between Michael and Matt Hough, Cintas Vice President and Chief Information Officer. Cintas––a Fortune 500 leader––leverages Google as a strategic component within their complex landscape.

C2C exists to bring companies of all sizes and across all industries together. That’s why we’re pleased to be partnering with Fulfilld, a major Google Cloud startup success story, for this series of conversations with executive and C-level representatives of enterprise Google Cloud customers. No registration or unique link is required; in fact, if you follow C2C on LinkedIn, you’ll get a notification as soon as the stream begins.

In the meantime, take a look at some of our previous content featuring Fulfilld to get a preview of what you can expect:

Categories: AI and Machine Learning, Application Development

A FAIR Approach to Generative AI: An Interview with Rackspace CTO Srini Koushik

Generative AI is taking the world by storm. Whether it sees it as an opportunity or a threat, every organization, in the tech world and beyond, is preparing in some way for the next phase of the generative AI boom. With speculation running wild, what most organizations need now is a clear-headed approach to using this technology effectively and sustainably. C2C partner Rackspace is providing just that. The Foundry for Generative AI by Rackspace is the company’s new suite of services that enables customers to implement AI with the right systems in place and the right values in mind. To learn more about this solution and what it can offer, C2C sat down with Rackspace CTO Srini Koushik for a long and wide-ranging discussion about the risks and the possibilities of “releasing AI into the wild.”

Tell us about your background and your role at Rackspace.

I’m Srini Koushik. I’ve been at Rackspace for about eighteen months. I’m the chief technology officer. I’ve got almost forty years in this industry. I got my start back in 1987 in India. My master’s degree thesis was on artificial intelligence, in an area called “frames,” which was all about giving context to the AI you’re building. At that time, it was very rudimentary. ’87 was a long, long time ago, but when you fast-forward and see how these incremental developments have built on one another over the past 40 years, I feel as excited as a twenty-something entering this new age of AI.

What is FAIR?

Most people in technology have heard of generative AI. One thing about generative AI is clear: it’s real and here to stay. If you go back and look at Rackspace’s 25-year history, we’ve been there whenever there’s been a technology shift where our customers needed help. At the start of the century, it was managed hosting when the web was taking off, and later the first public cloud with OpenStack when cloud computing emerged in 2009.
We think Generative AI is such a massive opportunity that we must be here for our customers. We’ve been working with Generative AI for Rackspace’s internal applications since the start of 2023, and we feel we have compelling offerings that we are ready to bring to market. FAIR is the global practice that we’ve set up to co-create these solutions with our customers.

A little bit about the name: we chose FAIR because it stands for Foundry for Generative AI by Rackspace, but we liked a few things about the name. Number one, the word foundry. If you go back to the Industrial Revolution, the foundry was where you brought raw materials together with machinery and had skilled professionals create things that had value to customers. That’s precisely where we are right now. We have the materials, which is data; we have the machinery, the large language models; and we have skilled practitioners, our Rackers, coming together to develop AI-powered solutions that are valuable to our customers.

At this point, many service companies in our industry have discussed the hundreds of use cases they’ve identified. We have focused on converting those ideas into reality. So that’s what FAIR is. The other reason we loved the name FAIR was that it’s a guiding principle for us to focus on the responsible and sustainable adoption of AI. This isn’t AI for AI’s sake; it is a responsible approach to AI that’s equitable to people, is secure, protects privacy and intellectual property rights, and consumes the planet’s resources in a way that promotes sustainability.

What does it mean to use AI fairly and sustainably, and how does FAIR accomplish that?

We decided to take many of our internal systems to Google Cloud three years ago. That was before my time, but when I got in, I was very pleased that that was the platform they picked.
I’m certified as an architect on all three hyperscalers, but the first one I got certified on was Google Cloud. Google has been a leader with its stance on sustainability and its approach to open source, and these were the same core values that Rackspace was built on, so it was a great fit.

The IT function within Rackspace reports to me, so being on Google Cloud ourselves gave us an opportunity to be a pioneer with Generative AI. We’ve been in preview programs for many of the products that Google has released, which allows us to learn by doing and by building solutions that help our business. We have had to learn how to build these solutions, select the appropriate large language model, tune the model, and secure and protect the privacy of data. As a mid-sized global organization, we also had to learn how to do these things frugally.

People ask me about sustainability, and I don’t say it lightly, but I say, “The only green technology is the one you don’t use.” Anything that you use is going to consume electricity and resources. However, if you are very responsible about how you consume it and pay attention to that as a non-functional requirement of any solution you’re building, you’re going to end up reaping the benefits of that solution.

Rackspace Intelligent Co-pilot for the Enterprise (ICE) is one of the first solutions we’re rolling out, and because we know what Rackspace ICE looks like when we deploy it to twenty people, we know the best way to take it and deploy it to a thousand people across the globe. Where do you deploy the models, and in which Google Cloud regions? How do you tie it to clean energy? We’re not only producing the outcomes we’re looking for, but we’re also trying to make sustainability a business outcome, and that’s critical.

What are some key use cases for FAIR? How can Google Cloud customers use it? How are Rackspace customers already using it?
We started with cross-domain use cases. We had two solutions that we started with: one is called RITA (Rackspace Intelligent Technology Assistant), and the other is Rackspace Intelligent Co-pilot for the Enterprise (Rackspace ICE).

RITA is precisely what it sounds like. It’s an intelligent chatbot that uses intent-driven automation to automate and simplify the provisioning of IT services within Rackspace. Rackspace IT doesn’t have anyone answering the phones anymore. All the level-one support is done through automation, and then the second-level support goes to our engineers. It’s been very helpful because RITA automates the toil, freeing up our engineers to step in and become problem solvers. This is a case where AI is not replacing people but giving them an opportunity to move up in their careers. As Google Cloud continues to enhance its products, it opens up many new possibilities for us; for example, we can leverage language translation to make RITA multilingual, so Rackers across the globe can converse with her in their native language.

The other use case, Rackspace ICE, is essentially what Google Cloud calls enterprise search. It’s, “How do you take these islands of information that sit within an enterprise and start connecting and correlating them, and expand access to this wealth of context-rich information through a friendly natural language interface, so that you start to unlock solutions you didn’t even know existed?”

“The emergence of Generative AI is not unlike the invention of the Gutenberg press.”

The minute we start showing those capabilities, you start unlocking possibilities in other places. I spent time with our chief legal officer yesterday, and he asked, “Can we go search our contracts? I’ve got to be able to do the same thing.
I want our lawyers to focus on being lawyers, not spend the majority of their time looking for information relevant to what they are working on.” You can imagine that within any enterprise, so many of these areas have been underinvested over the years, and they’ve grown up as silos: HR, finance, legal, and marketing. We can see Rackspace ICE solving these problems in all of these domains.

Those two use cases are essential for making us more effective, and every one of them applies to every customer we have. As we work with our customers, we can address challenges from a position of experience, because we have dealt with the challenges our customers are likely to encounter in their journey: cloud platform setup, securing AI, security and privacy controls, policies, guardrails, and governance. While Google Cloud has made the technology much easier, implementing it within an enterprise is much more involved. We’ve been advising companies on how to do that. Three months ago, we created a generative AI policy that governs the responsible use of AI within Rackspace. Now we’re applying the policy as we create these solutions, and we’re finding that it was a good start, but we probably must continue adding more things. This is the learning process, so our customers can benefit from all our work in each of these domains.

A new technology emerges every year. Why a foundry for generative AI?

There’s a new technology every six months these days, not even every year, but we think the emergence of Generative AI is not unlike the invention of the Gutenberg press. The Gutenberg press revolutionized the world by transforming the way ideas were communicated and knowledge was disseminated. With movable type and mechanized printing, the press made books more accessible, accelerating the spread of information.
This breakthrough democratized knowledge, fueling the Renaissance, Reformation, and Enlightenment, ultimately shaping the course of human history. Just as the Gutenberg press disrupted the dissemination of knowledge, Generative AI is redefining how we create and interact with information. Like the press, Generative AI will reshape industries, foster new ideas, and democratize artistic expression, opening doors to a future limited only by our imagination.

With FAIR, we cut through all the complexities of AI and aim to make working with Generative AI easy for our customers. FAIR does three things: ideate, incubate, and industrialize. In the ideation phase, we’re trying to determine how desirable AI is to your organization. How ready are you as an organization for the advent of AI? Do you have the right policies, governance, and guardrails? We start with our database of use cases and work with customers to determine: which use cases apply to them? Which one should they work on first? And does the customer have access to the data they need to get started?

In the incubation phase, we move from establishing the desirability of AI to determining whether it’s feasible to implement the use case in the organization. You may want to do this, but if you don’t have all the data or you don’t have the skills, you’re going to run into different constraints. Feasibility is all about identifying those constraints and figuring out how to overcome them. At the end of the incubate phase, you have something you can take to the board. You can demonstrate based on your data and get the buy-in of the board and the leadership to drive this forward.

The last step in our approach is the industrialize phase, which I call “releasing AI into the wild.” In the incubate phase, the solution was available to a handful of people.
Still, if you want to release it to your entire organization, you need to build new processes and techniques to manage and govern AI to ensure the desired outcomes.We’re working with our customers to co-create that journey for them and do that iteratively, and Google Cloud has allowed us to do this with the innovative products they are releasing at a breakneck pace. I’m excited about it; I go to bed, and when I wake up, they’ve released something new, and those products open up different solutions that we can co-create with our customers. We’re thrilled to be a Google Cloud partner with generative AI and data, and as we move forward and get our customers through the incubation phase, you’ll see a flurry of customer testimonials from FAIR. Extra Credit:  
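The intent-driven automation Srini describes for RITA can be illustrated with a minimal router: classify a message into an intent, then dispatch it to an automated handler or escalate to an engineer. Everything below (intent names, keywords, handler responses) is invented for illustration and is not Rackspace's implementation.

```python
# Minimal sketch of intent-driven automation: keyword-based intent
# classification with a fallback to human escalation (all names hypothetical).

INTENT_KEYWORDS = {
    "reset_password": {"password", "reset", "locked"},
    "provision_vm": {"vm", "server", "provision"},
}

def classify_intent(message: str) -> str:
    """Pick the intent sharing the most keywords with the message."""
    words = set(message.lower().split())
    best = max(INTENT_KEYWORDS, key=lambda i: len(INTENT_KEYWORDS[i] & words))
    # No keyword overlap at all means we can't automate: escalate instead.
    return best if INTENT_KEYWORDS[best] & words else "escalate_to_engineer"

def handle(message: str) -> str:
    """Route level-one requests to automation; everything else to a human."""
    handlers = {
        "reset_password": lambda: "Password reset link sent.",
        "provision_vm": lambda: "VM provisioning ticket opened.",
    }
    fallback = lambda: "Routed to a level-two engineer."
    return handlers.get(classify_intent(message), fallback)()
```

A production assistant would replace the keyword sets with an LLM or a trained intent model, but the shape is the same: automation absorbs the repetitive level-one toil, and anything unrecognized goes to an engineer.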

Categories: AI and Machine Learning

2023 C2C Member Pulse: The Role of Google Cloud in Becoming Cloud-First

In March 2023, we fielded the C2C Member Pulse to observe trends, skill gaps, and challenges in Google Cloud and how our members use community to enrich their cloud experience. This infographic highlights a portion of the findings.

As a valued member of the Google Cloud customer community, we understand that each of you is on a unique journey with Google Cloud. Some of you may be well-versed and advanced in your Google Cloud usage, some are exploring and building business cases, and others may be anywhere in the middle of that spectrum. We recognize that not all cloud investments are the same, considering the wide range of products and services offered by Google Cloud. We also understand that respondents’ overall level of investment in Google Cloud—whether their own choice or their company’s choice—changes to meet digital transformation needs. These findings shed light on the investments made, talent needs, and solution usage patterns within our community, which helps us prioritize how to grow our network, foster connections, and curate content in our community.

Investments in Google Cloud

The survey results indicate that the majority of respondents, regardless of organizational size, are making increased investments in Google Cloud. Overall, respondents are either maintaining or expanding their investments in Google Cloud. The primary reason for increased investment is the integration and consolidation of systems on the Google Cloud platform, closing in on a digital-native or cloud-first approach. On the other hand, some respondents have reported decreased investments due to transitioning to other cloud vendors, with Amazon Web Services (AWS) and Azure being the most commonly mentioned alternatives. It's worth noting that 21% of respondents were unsure about the changes in their investment levels, possibly indicating limited involvement in the decision-making process.
Google Cloud Talent As organizations increase their investments in Google Cloud, the need for skilled Google Cloud talent grows with them. Interestingly, the most common way organizations find skilled talent is by having their permanent staff learn Google Cloud skills on the job. This was consistent across all organization sizes: however many people are available to work on these products, they will all be learning as they go, underscoring the ongoing learning and development required to work effectively with Google Cloud products. Solution Area Usage Among the technologies respondents use most, identity and security and data cloud products take the lead. This aligns with the challenges faced in data maintenance and integration, highlighting the significance of these solution areas. Although AI and ML are among the least currently used technologies, they are the top Google Cloud products respondents are planning to use at their organizations, reflecting how increasingly sought-after AI will be in the future. AI-enabled software was also noted as a top technology that will be needed for future digital transformation efforts. Respondents are least aware of startup and SMB solutions from Google Cloud. More guidance may be needed to support this solution area, as it is the one most respondents have no plans to use (16%). That said, among respondents at organizations that have been active for less than one year, 48% currently use startup and SMB solutions, 87% are aware of them, and 23% plan to use them. Visualizing It All Take a look at how the community ranks the options they were given for Google Cloud investment levels, nurturing Google Cloud talent, and solution area usage. View the image as a full-scale PDF here. Get the Most Out of Community As members of our community, your voices shape our initiatives. 
While we learned a lot about you collectively in this survey, we also want to meet your needs as an individual. Take a moment to update your profile today, and ensure you're receiving the most relevant and valuable content tailored specifically for your interests. Update Your Preferences

Categories: AI and Machine Learning, Identity and Security, C2C News, Infographic, Research

Key Takeaways from 2Gather Los Angeles: The Future is Now, Security and AI

2Gather Los Angeles took place on June 6, 2023, featuring speakers from Lytics, Wpromote, and Google Cloud. Buzz Hays, Global Lead for Entertainment Industry Solutions, and Iman Ghanizada (@iman), Global Head of Autonomic Security at Google Cloud, opened the event by discussing how the purpose of AI is to improve what people are already doing: whether for writers or animators, AI aims to enhance the artist's paintbrush. In the effort to provide businesses with better tools, many questions surrounding security and data arose. One major question concerned how to collect data effective enough to support AI projects. The entertainment industry, discussed at length during the event, served as a primary example: many AI applications in the industry require a sufficient amount of customer data to be developed, with identifying ad breaks and suggesting content on streaming platforms among the use cases. Jascha Kaykas-Wolff of Lytics stated that mature organizations can adapt to data pipelines, and that working across different departments makes the decision-making process much easier because it demonstrates how data is useful to certain parts of the organization. Paul Dumois, the CTO of Wpromote, added that businesses need to focus on specific problems to solve and retrieve the data that will help provide solutions to those issues. Overall, the discussions between the panel and the audience highlighted that AI has many moving parts and trends. An organization should focus on a specific area and start with a single project to learn about the challenges and dynamics of working with AI in real time. Additionally, analyzing the core metrics of the business and securing top-down support can help marshal resources when setting up projects or tasks associated with AI. 

Categories: AI and Machine Learning, Data Analytics, Identity and Security, Databases

2Gather Core Concepts: Let’s Talk Tech Atlanta

In April, C2C hosted its first 2Gather event in the Atlanta, Georgia area, a unique installment of the Let’s Talk Tech series, co-hosted with Google Cloud and featuring Google Principal Developer Advocate Kelsey Hightower. Kelsey joined a roster of distinguished speakers including Alex Barnes, Head of Infrastructure at Calendly, Russ Ayres, Senior Vice President of Security Architecture and Engineering at Equifax, and Rae Williams, Director of Customer Engineering at Google, for a full program of conversations tackling some of the biggest issues facing the cloud technology space today. Read on below for a roundup of some of the terms, products, and themes the conversations covered.  Open source: Any software whose source code is freely available to all users for collaborative review. Russ and Alex are both sympathetic to open source philosophy, and Kelsey is a veteran of the open source community. In his words, “When you go out into the open-source world, you get to define technology for the world.” Serverless: A cloud computing model allowing customers to access resources allocated by providers on demand without having to interact with a server, which Kelsey called “the final chapter of a mature compute pattern.” Platform engineering: Streamlining infrastructure to make tools and services immediately available to teams for operation and use. According to Alex, platform engineering allows an organization to “present all capabilities as things you can consume programmatically to unlock velocity.” Configuration management: The process for ensuring that products and systems maintain the same design and perform the same way. Russ contrasted configuration management with policy via the mantra, “If it was appropriately written it would be following the policy.” Kubernetes Google’s open-source container orchestration solution for software deployment and scaling has been setting standards and unlocking potential in the cloud space for nearly a decade. 
Kelsey’s history with Kubernetes is well known, and Rae made sure to capitalize on the opportunity to press him for his comments in front of a live audience. For context, Kelsey compared Kubernetes to contemporary projects in DevOps, insisting that DevOps engineers too often miss the forest for the trees. “What Kubernetes represents is a ten-year-old pattern for assigning software to servers,” he said. “It’s the system you would build if you had the big picture.” Calendly Calendly is a business communication platform offering unique scheduling solutions for organizations looking to maximize efficiency. Head of Infrastructure Alex Barnes joined Aiven Vice President of Product Marketing Amy Krishnamohan (@amy.km) for a fireside chat about the company’s journey on Google Cloud. Calendly was initially built on Ruby on Rails, but later wanted to invest in Kubernetes. Google was an obvious choice for a hosting solution, especially since calendar management is so essential to the company’s offerings. “Google Calendar is a massive part of what we interact with,” Alex explained. “How better to build on that relationship than to build on their platform?” Alex expanded on these comments via email after the event. “The partnership with Aiven and Google Cloud has allowed us to build a robust cloud data infrastructure that can handle the demands of our users,” he told C2C. “And it has given us the flexibility to scale up quickly, without having to worry about managing the underlying infrastructure.” AI Predictably, many members of the audience were excited to ask questions about recent developments in AI. Kelsey was quick to dispel any notions that AI is going to make tech professionals irrelevant or push them out of the market. Instead, he said, AI is providing a new baseline for innovation. 
When looking at generative AI solutions like ChatGPT, Kelsey said, users and developers should ask, “What is the dataset, and where is that dataset created?” Tech practitioners can still develop new solutions beyond the limits of the datasets used to train these products. “If ChatGPT can generate the code,” Kelsey told the audience, “it means we need a different interface.” Security Security was the main focus of Russ’s comments during the customer panel with Alex and Kelsey. As Senior Vice President of Security Architecture and Engineering at Equifax, Russ is responsible for securing highly sensitive personal data at one of the leading financial services organizations. However, Russ originally came to security as a developer. Coming from that background, Russ believes that “good security is good engineering,” and vice versa. “Most security solutions try to be everything to everyone,” he added. From his point of view, though, teams should aim to build with purpose on the front end so that solutions are designed to run efficiently and at scale with minimal risk. Hear from more industry leaders on these topics and join the conversation in person at our upcoming 2Gather event in Los Angeles: 2Gather Los Angeles: The Future is Now, Security and AI

Categories: AI and Machine Learning, Identity and Security

5 Cloud Trends to Track in 2023

The following article was written by C2C Global President Josh Berman (@josh.berman) as a member exclusive for TechCrunch. The original article is available here. In many ways, 2022 was a year of growth for the cloud technology space. Unpredictable macroeconomic developments saw many organizations thinking about and preparing for greater wins in the years to come instead of right away. In 2023, much of this preparation could come to fruition as the growth achieved in 2022 contributes to a stronger economy and rapid advancements, particularly in tech. Global IT spending is projected to climb by 5.1% to $4.6 trillion in 2023, according to Gartner, driven by an 11.3% increase in investments in cloud applications to $879.62 billion. What does this kind of increased spending and investment mean for organizations? C2C Global, a Google Cloud customer community, has identified five cloud trends to watch in 2023. “Moving forward, custom solutions, rather than one-size-fits-all offerings from individual providers, will increasingly become the norm.” AI and ML tech adoption will rise Every organization wants to harness the many and varied capabilities of AI and ML technology. Some want to use their data to enhance analytics and build predictive models, and others want to automate repeatable processes. Currently, many AI and ML models require extensive testing and training before they can be implemented at scale across large organizations hosting petabytes of data or serving wide customer bases. In fact, C2C’s research has found that only 47% of respondents are currently using AI and ML. However, these technologies ranked high among the ones that respondents hope to adopt in the future. The promise of these technologies is too significant to ignore. As models are refined, and training and testing become more reliable and automatic, organizations will come to rely on these technologies more. 
We’ll see more low-code/no-code app development platforms Partly due to the rush to adopt AI and ML technologies that still require a lot of maintenance to perform reliably at scale, development teams are likely to implement low-code and no-code applications to reap the benefits of these technologies without the burden. For skilled developers, low-code and no-code options promise a lower barrier to entry for introducing and managing complex models. Significant savings in time and cost, as always, will also be a massive draw. More organizations will host resources in multicloud environments Every cloud strategy requires delicate analysis to determine the proper balance of cost, efficiency, performance, scalability, and security. For a lot of organizations, sticking with a major cloud provider promises attractive savings that make a lot of practical sense. However, as cloud technology grows, individual products will be just as attractive to companies prioritizing scaling and transformation. Moving forward, even for companies using one cloud provider, adopting and implementing new resources from other providers may add value, and custom solutions, rather than one-size-fits-all offerings from individual providers, will increasingly become the norm. Remote work tools will continue to improve While remote work emerged during the pandemic as an emergency measure, the tools developed to accommodate it are now available as part of the expanded landscape of hybrid work technology. As AR and VR technology become more viable, organizations will continue to introduce and adopt new means of building a work environment that suits the needs of a diverse and changing workforce. Cloud adoption will increase in formerly resistant sectors Until recently, organizations in government and financial services resisted transformation due to the risk and burden of retiring entrenched legacy systems and migrating massive amounts of data. 
Lately, though, the advantages of cloud adoption have been harder to ignore, and more organizations in these industries are adapting accordingly. For example, the U.S. Army recently said it would start using Google Workspace for its personnel operations. This expansion into previously less served areas of the cloud market speaks volumes about the pace of cloud adoption.

Categories: AI and Machine Learning, Application Development, Hybrid and Multicloud

Thinking Differently About Automation at 2Gather: NYC

On November 10, 2022, C2C returned to Google’s offices in Chelsea, Manhattan for a 2Gather event all about intelligent automation. The robust event program included a fireside chat with representatives of Granite and Becton, Dickinson and Company (BD) moderated by C2C partner Automation Anywhere, a presentation from partner Palo Alto Networks, a conversation between partner Workspot and their customer MSC, and a panel featuring the speakers from MSC, Workspot, BD, and Granite. Google’s Drew Hodun introduced and moderated the event program, but the majority of the content was driven by the participating customers and partners and the guests in attendance with questions and ideas to share with the speakers and with one another. After a hello and a word on C2C from Director of Partnerships Marcy Young (@Marcy.Young) and an opening address from Drew, Ben Wiley of Automation Anywhere introduced Paul Kostas of Granite and Nabin Patro of BD and offered some background on Automation Anywhere’s mission to build digital workforces for organizations that need them, with a particular focus on business processes like data entry, copy and paste, and parsing emails. Ben also mentioned Automation Anywhere and Google Cloud’s joint solutions for office departments like contact centers. Paul made a point of shouting out solutions like AA’s Automation 360 and Google Cloud’s Doc AI, which Granite used to build 80 automations in 9 months, and Nabin touched on how automation helped manage some of the work that went into BD’s manufactured rapid diagnostic test kit for COVID-19. “The technology is forcing us to think differently.” Next, Akhil Cherukupally and David Onwukwe of Palo Alto Networks took the stage to walk through some of the technical components of the security platforms the company offers organizations navigating the cloud adoption process. 
Then Workspot’s Olga Lykova (@OlgaLykovaMBA) brought up Google Enterprise Account Executive Herman Matfes and Dung La and Angelo D’Aulisa of MSC for a look back through the history of the companies’ work together. Olga started things off with an origin story about the Citrix leaders who left their company to start a cloud-hosted platform with Workspot, which turned out to be a superior business model. Then she turned to the other guests to explore how Workspot helped MSC build automations on the front end of their business processes and ultimately implement these automations end to end.
Speaker panel at 2Gather: New York City
Finally, Drew, Angelo, Dung, Paul, and Nabin returned to the stage for a panel discussion breaking down all of the issues raised during the previous sessions. A question from Drew about how each organization’s work has impacted its customers prompted Paul to go long on the benefits of Granite’s services. When Angelo gently added, “We’re a Granite customer,” the audience laughed along with the panelists. “Thank you for being a customer,” Paul said. Drew also asked the group about what’s coming next at each company. The answers ranged from the concrete to the philosophical. “The technology is forcing us to think differently,” Nabin observed. In response to a question from a guest in the audience, Paul acknowledged the human impact of automation and stressed the importance of getting people to feel good about automating processes rather than fearing for the future of their jobs. As usual, the conversations did not stop here. The speakers and guests continued to share ideas and brainstorm solutions into the networking reception and even the informal dinner that followed, where Clair Hur (@write2clair) of Vimeo stopped by to explain how the company is cutting costs significantly after migrating from AWS to Google Cloud. More of these stories will be collected in our upcoming monthly recap post. 
For now, watch the full recording of the New York event here:

Categories: AI and Machine Learning, Cloud Operations, Session Recording

Introducing Intelligent Automation Everywhere With Shalini Mayor, Salesforce Senior Director of Enterprise Automation

Before Shalini Mayor (@smayor) brought her background in automation to leadership roles in the private sector, she “almost became an astronaut.” As a subcontractor to NASA’s Langley Research Center, Shalini worked on various coding and algorithm development projects. She may have moved on from NASA before experiencing space travel, but much of her work as a director of Enterprise Automation at Salesforce is not unlike observing Earth from a distance. “With the explosive growth that you’ve seen at Salesforce, it’s very easy to get disconnected,” she told the crowd at a C2C 2Gather event in Sunnyvale, California. “Everything runs as a little startup within itself…when I was brought in, my primary role was to bring some structure to this madness.”“Since then we’ve been scaling out,” she continued, “trying to figure out ‘where do we have the most repetitive processes?’” Finance, HR, and IT operations are the major sites of repetitive processes at Salesforce, according to Shalini. What does it take to automate processes in so many different areas at a company with over 77,000 employees? In Shalini’s opinion, it takes more than just robotic process automation (RPA). “What we’re looking at really is a business process end to end,” she told Sunnyvale. “RPA is a small part of it. What about the rest of it? How do we reduce manual intervention in any process? How do we actually take that away so that it will just run?”To answer these questions, Shalini is thinking beyond the scope of the automation currently adopted at most organizations, sometimes back to the math and science she studied in graduate school as the basis for her education in AI and ML. “Anything that you look at all the way back down to the rudiment, it’s still exactly the same,” she said in an interview after the Sunnyvale event. 
Even though the extent of what’s possible with automation today is “mind-boggling,” the automations themselves are still based on the same linear algebra as the first AI and ML models Shalini encountered as a student. For Shalini, thinking about the foundations of automation makes it possible to look beyond RPA bots and straightforward rule-based models, incorporating approaches like decisioning and illuminating new opportunities. “What we’re looking at really is a business process end to end.” At Salesforce, these new possibilities include Natural Language Processing (NLP) and Natural Language Understanding (NLU) technologies like Google Cloud’s Document AI and other solutions in high demand at the company’s contact centers, which Shalini sees as high-priority contexts for automation use cases. Despite her enthusiasm for automation, however, Shalini is careful not to forget the human factor of workplace processes. She is not interested in reducing or combining job roles, as some workers fear executives may plan to do with automation in place. “If I can take some of these mundane tasks off people’s lists,” she told Sunnyvale, “that’s where the growth comes in.” This human factor is also what Shalini recognizes as the value of a customer community and open spaces for peer-to-peer discussion like C2C’s events. At Sunnyvale, she particularly appreciated “the fact that I could speak with so many people and help them learn something” and “learning that people are facing similar issues.” On November 10, 2022, Ben Wiley, Vice President of Commercial Sales at C2C Partner Automation Anywhere, will appear alongside a diverse panel of guests to elaborate on some of what Shalini discussed in Sunnyvale, face-to-face, with a fresh group of Google Cloud customers and partners looking to automation to solve their business problems. To join them, use this link to register today.

Categories: AI and Machine Learning, Google Cloud Partners

Connecting Across Career Journeys at 2Gather: Chicago

When Meiling He, Senior Data Scientist at Rockwell Automation, was asked at the last minute to fill in for her manager, Francisco Maturana (@maturanafp), at 2Gather: Chicago, she had never heard of C2C Global. The next day, she was on a train from Milwaukee preparing to speak at the Google Cloud customer community’s first face-to-face event in the Midwestern US. “Yesterday was the first time I heard about this, at around 3:00 p.m.,” she said. “It was new, but my manager sent me the information about what questions would be asked, and he did have his preparation for the event, so I got the information I needed.”
From left: Lilah Jones, Paul Lewis, Meiling He, and Vrinda Khurjekar
Meiling presented alongside Pythian CTO Paul Lewis, who spoke to C2C in advance of the event about how the company prepares data sets to be used for a variety of AI and ML solutions, and Vrinda Khurjekar, Senior Director of AMER Business at Searce. The panel discussion, moderated by Google Head of ISVs and Marketplace Sales Lilah Jones, explored how businesses can use AI and ML solutions in general to get the most value out of their cloud adoption. Even though she had had so little time to prepare for it, Meiling’s experience at the event was a pleasant surprise: “I think it was so fun. I learned a lot from the perspective, the questions, the answers. It’s so nice to be around people like Lilah and Paul. They’re so knowledgeable and outgoing.” Meiling was also pleasantly surprised to be able to make her own connections following the scheduled program with other customers in attendance. She appreciated having the chance to talk shop with a fellow data practitioner, Revantage Data Engineer Trevor Harris. Many of the other guests in attendance were satisfied with the opportunity to network as well. “It’s a great place to connect with other professionals, business and also technical, and it’s a really wonderful experience,” said Henry Post of US Bank. 
“Great food, great presentation, and great people.” Jeff Parrish (@Jeff P) of Redis agreed. “I thought it was excellent,” he said. “It was a good flow, good panel, good interaction, and a good pick of different industries and different people.” “I think it was so fun. I learned a lot from the perspective, the questions, the answers.”
Guests mingling at 2Gather: Chicago
The opportunity to connect with other Google Cloud customers was also a major value-add for the Google and C2C partners in attendance. “It was excellent. I learned a lot about Google’s partnership with some of its customers, and got to network with some excellent people,” said Brendan O’Donnell (@bpod1026), a customer success manager at Aiven, which joined C2C as a partner after sending employees to multiple C2C events this spring and summer. “I met some representatives from Salesforce. Jeff from Salesforce.” Unlike Meiling, Jeff Branham (@Branham24), current Director of Industry Alliances at Salesforce, knew all about C2C. In fact, as many of our members will remember, Jeff served as C2C’s first Executive Director before moving on to his new role. He was excited to be able to attend a C2C event in person, having left the company with COVID quarantine measures still in place, and was pleased to see how the team had grown. He was also pleased to be able to make some connections of his own, particularly with Paul Lewis of Pythian, who gave him some valuable insights as a representative of a Google partner company about collaboration between CTOs and CFOs. Meiling was also excited to be able to hear from a CTO, as a practitioner who hopes someday to move into an equivalent role. “Since day one of working at Rockwell I wanted to be a data scientist,” she said. 
“I was the Business Intern, then Data Analyst Intern, then IT Associate, then Data Scientist, then finally Senior Data Scientist, so it was a long journey.” Now that she has reached this point in her career, Meiling is grateful to be able to connect with leaders who inspire her to take the next step professionally. She looks forward to more opportunities to do so at C2C events. “I would like to know what other people are doing at their own company,” she said. “I hope I will be invited.”

Categories: AI and Machine Learning, C2C Community Spotlight

The Why and the How of AI and ML Insights: An Interview with Pythian CTO Paul Lewis

On August 11, 2022, C2C will host 2Gather: Chicago, the Google Cloud customer community’s first in-person event in the Chicago area. Moderated by Lilah Jones, Head of Corp Sales, Central US, Google Cloud, the event program will feature speakers Francisco Maturana, a data architect at Rockwell Automation, Vrinda Khurjekar, Senior Director of AMER Business at Searce, and Pythian CTO Paul Lewis. The panel will discuss the technical and business advantages of using AI and ML on Google Cloud. In advance of the event, we reached out to Paul Lewis, an engaged and active member of our community who joins us from our foundational platinum partner Pythian, to discuss AI and ML insights, connecting business and technical collaborators, and the value of a peer-to-peer Google Cloud community. Pythian has received significant industry recognition for its data solutions. To what extent today does a data solution necessarily require an AI or ML component? It is fair to say that most data solutions have a “why,” and that why is because I’m trying to create some sort of insight. Insight might be for the purpose of creating a new customer experience, or creating some insight for efficiency, or monetizing the value of a current set of offerings, and that insight requires a combination of three things: I need to find where the data is in my core systems from my third party, I need to create analytical value in a data platform, and I need to use AI and ML algorithms to source out that piece of insight which I’ll use to make a decision. So it has all three of those components. I’d argue that if you’re starting with the end, starting with the insight, all of that technology and process is required to deliver on it. You spoke with C2C earlier this year about cloud security and the shared roles of businesses and cloud providers. When working with systems and processes that are largely automated, what cloud security considerations arise? 
Cloud security requires the assumption that you are going to bring your algorithms to the data versus the data to the algorithms, a really big shift from exporting data out of a production system onto your laptop, producing your algorithms in your API of choice, and then sending that algorithm back up to be both trained and tested. Now it’s about training and testing in the cloud, which has direct access to those data sets internally and externally. So that’s the big shift: moving where you’re developing your model, training your model, and creating inference or executing on that model. It is the best bet to do that in the cloud. A big problem in healthcare, as you can imagine, is sharing information across organizations. Since data sharing is required to make complex diagnostic decisions, I need to be able to package up that information from a diagnostics perspective, share it amongst a group of people, and then that prediction can come together. Multiple practitioners can participate in the model development, multiple practitioners can provide input into the model and the training, and then infer from it for the purpose of new patients coming in. On August 11, at 2Gather: Chicago, you’ll be speaking alongside Francisco Maturana, a data architect at Rockwell Automation, and Vrinda Khurjekar, Senior Director of AMER Business at Searce. As a CTO, how does speaking alongside both technical and business professionals influence the kind of discussion you’re able to have? My conversations tend to balance the difference between why and how. On the business side, what are ultimately the business goals we’re trying to achieve? It tends to boil down to something like data monetization. Now, monetization could simply mean selling your data, it could mean creating better insight on your customers, maybe customer segmentation, or it could mean wrapping a non-data-related product with a data-related product. 
Like a checking account alongside an ability to predict spending behavior changes over time. Or it might be internal, making better M&A decisions or creating some sort of efficiency in a process, or just making general business decisions better or cleaner in a sense. So, you can take that why and say, ‘well, that why can be delivered on a variety of hows.’ A how can be as simple as a query and as complex as the entire data engineering chain. And that’s the bridge between the why and the how. Not only does the data engineer or data architect get a better appreciation for the type of business decisions I need to be able to make based on this work, but the business person gets to understand the potential difficulties of making that actually true. Do you think that most customers come to a peer-to-peer panel discussion with a why or a how in mind? Yes. Very rarely is it unanswered questions. Very rarely is it, ‘I know I have some nuggets of gold here, could you possibly look into my pot and see if there’s anything interesting?’ That might have been true five years ago, but people are much more well-read, definitely on the business and the technology side. There has to be a why, and if there has to be a why, there’s a one-to-many relationship between it and the potential hows. What’s our best bet at the how? Data engineers, data modelers, and data scientists are the go-to people to hire. In fact it’s so complex that I now need partnerships of talent: I might not know whether I need a junior, senior, or intermediate scientist, because I don’t have that background. I don’t have that expertise, so I’ve got to lean on partnerships in order to figure that out. Is being able to find the right why for the right how what makes a community of Google Cloud customers uniquely valuable? Exactly. It’s also sharing in our expertise. 
There’s this huge assumption that I just have to acquire the expertise to deliver on my particular why or how, that I just need to learn Python in twenty-one days, that I just need to get another data modeler to understand what a bill is, what a person is, what a patient is, what a checking account is, but the reality is you have to balance expertise with experience. You could hire a bunch of people or train up your existing staff, but if they’ve never done it before, that’s where you need partnerships. That’s why you need a community. That’s why you need to be able to talk to your peers. That’s why you need to have these kinds of conversations, to balance what I think I can do with what’s actually possible, or what’s been done before. Are there any particular conversations you’re hoping to have at the event in Chicago? Yeah, absolutely. The conversations I’m looking to have are unique or interesting whys that I think could be compelling across a variety of industries. What I find most interesting isn’t that two retail chains have the same customer segmentation problem, it’s that you can take customer segmentation in retail and apply it to the manufacturing of cookies. So, something we can reuse across these industries, because in my opinion these industry solutions are going to be on the forefront of the whys. I’m going to be able to download cookie client segmentation and then augment it for my needs. I don’t have to invent it going forward. Do you have any final thoughts to share with the Google Cloud customer community? I’m really looking forward to this particular event. It’s rare that we get to have real peer-to-peer conversations, so I’m absolutely looking forward to it, and Google’s a nice space to do it in, so that’s always a bonus. Are you based in Chicago? Do you need to find a how for your why, or vice versa? Join Paul, the C2C Team, and the rest of our distinguished speakers at 2Gather: Chicago on August 11! Register here:  

Categories: AI and Machine Learning, C2C Community Spotlight, Google Cloud Partners, Interview