Trevor Marshall (@tmarshall), CTO of Current, first joined C2C in person at a 2Gather event in New York City as a guest of our partners at DoiT International. At that event, Trevor coined the unofficial Google Cloud marketing slogan “come for the credits, stay for the Kubernetes,” and took home some ideas about C2C partner AMD’s compute instances. At the Cloud Adoption Summit in New York on September 12, 2023, Trevor returned for a customer panel and a fireside chat with DoiT Field Service Manager Meryl Hayes (@merylhayes). After the event, we caught up with Trevor to see what he thought of AMD’s offerings, get his thoughts on the hysteria over generative AI, and look back on his past as a concert musician. Read on below for a short interview with Trevor, a full recording of his conversation with Meryl, and a fireside chat on security and compliance with Wayne White and Lawrence Chin (@LawrenceChin) of C2C partner Palo Alto Networks.

At 2Gather: New York City, you spoke onstage with Michael Brzezinski from AMD. When we spoke afterward, you mentioned you had been considering the AMD-powered C2D compute instances for a proof of concept you were working on. Did you work with AMD on that at all?

We tried spinning up some workloads. We have a very C2-heavy stack. Most of what we do is throughput-based. We’re not keeping state in a lot of processes, so the compute-optimized chips we’re using are the best fit for what we’re looking for, and that’s most likely going to be the case going forward. Because it’s the C2 family, we get the benefit of any improvement to the underlying physical substrate coming through. We’ll probably stick with that family over the next couple of years unless something else emerges.

Did you see or hear anything at this event that applied to your work in a similar way?

We’ve been PCI compliant for a couple of years.
We’re getting our SOC 2 attestation this year, so standardized control frameworks for regulatory and technical oversight have been top of mind. There was a nice presentation from Palo Alto Networks about some of what’s on the horizon when it comes to technical regulation, so it was good to see that. We’ll probably look at some of what they call harmonized frameworks, because there are all sorts of ways of looking at basically the same technical control. We’ve taken that approach in the past and we’ll probably revisit it, especially now that we’re attesting to multiple certifications. That was cool to see and get more information about.

At 2Gather: NYC, you mentioned the tendency of some coders to get carried away experimenting with a new “sweet tool.” Do you have any thoughts about how to manage that tendency when it comes to generative AI?

I think there’s quite a lot of moral hazard emerging. It’s so easy now to get auto-generated code through Copilot or other products that are emerging. No doubt, if you are an efficient developer, you can leverage that type of technology to become more efficient and produce more code, but it’s going to lead pretty quickly to an abstraction and a disconnect from the actual business logic itself. When something goes wrong in production, you don’t really know what’s happening, and you’re probably going to produce less efficient code. Now, maybe some of these autosuggestions get so much better than the human that eventually they take care of it, but this is the nature of our systems: as engineers, we don’t really hold ourselves to the same high standards that a mechanical engineer would, where the tolerance for failure is zero percent. Software engineering has always had this built in: “hey,
some things are going to go wrong, but we’ll have incident response and we’ll make sure that we’re operating as correctly as possible.” Especially for a company like ours in financial services, reliability is super important. As these gen-AI code production tools come into production, and with that “sweet-tooling” of “oh, look at this sweet intelligent plug-in that now writes half my code,” I’d just keep an eye over the next couple of years on the postmortems that emerge from code that was not written by a human.

I read that you used to be a musician. What was your instrument?

I played orchestral percussion and I wrote music.

Did anything from your music background carry over to what you do in technology?

Two big things came from my music background. The first was discipline. I was very hardcore, conservatory-style. I was at the Juilliard Pre-College. Every Saturday from 8 a.m. to 8 p.m., I’m doing music, and every other day I’m practicing five hours a day. I wasn’t big into sports. I wasn’t that big into education at the time. What stayed with me was knowing what it took to be exceptional in a true meritocracy; classical music, even now, is still pretty much a complete meritocracy. There are some political navigations around it, but for the most part, you can’t fake being a great musician, because everyone finds out in some way or another. Even if you look at hyper-produced artists, what they have goes beyond music, into image. There’s an artistic integrity that’s generally really hard to fake, so that discipline was really important.

The second thing was the creativity of writing music. A lot of the abstractions that come up in writing music, like metric modulation, had immediate applications in algebra when I started studying math in college, which is the direction I took in undergrad. It tickles the same part of the brain.
So I think those were the two big things: the ability to imagine something and turn it into reality through creativity––you have to completely understand what you’re trying to produce when you’re writing music––and then the discipline of actually being able to produce it and get it to the place you’re imagining.

DoiT and Current at Cloud Adoption Summit New York City:

Palo Alto Networks at Cloud Adoption Summit New York City:
Hello C2C! This week we dive into research from Gartner and its prediction for GenAI as part of the 2023 Emerging Tech Hype Cycle. We provide a simple perspective on how to put it into practice, with examples and a few opportunities for us to meet and debate! This week's CarCast also covers:

1) Leaders to know and follow at Carrefour, Walmart, and more.
2) How to make "GenAI real in the enterprise."
3) The cybersecurity ecosystem map.

And stats from the SaaS Growth Report on how much time it takes best-in-class SaaS companies to reach the $1M and $10M ARR milestones. As usual, the video only covers highlights of the deeper research, resources, and examples that you'll find in the blog. What's your opinion on this week's topics? Leave your comment below and let's continue the conversation.

Analytically yours,
Bruno
How Much Does GenAI ACTUALLY Cost?! This week we dive into research from McKinsey & Company, which offers three archetypes and a cost framework you can apply. We also discuss what's required and how enterprises can unlock the power of generative AI, with details in a piece by Matt Marshall called "From data chaos to data products." As usual, the video only covers highlights of the rich research, resources, and examples you'll find in the blog. This week's CarCast covers:

1) The best practices of data leaders like Wendy's, Orange, Carrefour, and Sabre.
2) Is the IPO market back?!
3) How much will GenAI cost… really?
4) Data chaos vs. data products: how enterprises can unlock the power of generative AI.
5) Twelve summer special highlights, from "How to Train Generative AI Using Your Company's Data" to how WPP is doing data differently!

Have a great week! Bruno
Greetings C2C members! Today we’re excited to let you all know about a new content series C2C will be piloting later this summer: a live interview series hosted on our LinkedIn account, featuring interviews with enterprise Google Cloud customers, hosted by Fulfilld’s CTO and co-founder Michael Pytel (@mpytel). Fulfilld is a Google Cloud digital-native warehouse management orchestration platform incorporating Google Vertex AI. As a major disruptor in the warehouse management software space, Fulfilld is in contact with a range of enterprises that want to streamline their warehousing operations using tools like Fulfilld’s AI-powered spatial analysis functionality, custom digital assistant, and digital twin. How this all happens, and how Google Cloud makes it all possible, is more than worth breaking down with an in-depth discussion.

Join us on August 23, 2023 at 1:00 p.m. ET on LinkedIn for a special conversation between Michael and Matt Hough, Cintas Vice President and Chief Information Officer. Cintas––a Fortune 500 leader––leverages Google Cloud as a strategic component within its complex landscape. C2C exists to bring companies of all sizes and across all industries together. That’s why we’re pleased to be partnering with Fulfilld, a major Google Cloud startup success story, for this series of conversations with executive and C-level representatives of enterprise Google Cloud customers. No registration or unique link is required; in fact, if you follow C2C on LinkedIn, you’ll get a notification as soon as the stream begins.

In the meantime, take a look at some of our previous content featuring Fulfilld to get a preview of what you can expect:
We started with VentureBeat Transform, with incredible lessons from practitioners and vendors in the space, and next week the MIT CDO Symposium is bound to be full of learnings from the CDOs of Visa, Universal Music Group, Sanofi, Colgate-Palmolive, Herbalife, and many more. In fact, at noon PST today, my friends Eric, Sanjeev, and Jascha will be breaking this down for you all on a new program called the Inside Track! As usual, the video only covers highlights of the rich research, resources, and examples you'll find in the blog. This CarCast covers:

1) How and where can GenAI help YOU?
2) GenAI best practices from Wayfair, Walmart, and Citi.
3) Four types of innovations.
4) What makes Duolingo grow?
5) MIT's CDO Symposium & join me on the "Inside Track" next week.

And some extras: a summer reads (and re-reads) list, 8 CEO lessons, NotebookLM, TheCubeAI, how to build a chatbot for $5… and the new Napoleon movie! Have a great week! Bruno
Are You Doing GenAI Right?! Do you have the tools to future-proof your GenAI strategy? This week, at VentureBeat Transform, you’ll hear a lot about that from data leaders from Hyatt, Walmart, Wells Fargo, Wayfair, Baptist Health, McDonald’s, and Joann Stonier, Mastercard’s CDO of 15 years… so you know you’re bound to hear some tried and tested best practices. As usual, Bruno's blog covers rich research, resources, and examples that will help you further your understanding of this week's topic. Get in the CarCast and let's be data driven! This CarCast covers:

1) How to future-proof your GenAI strategy.
2) How to use AI to deliver value.
3) Stay above water with AI: mind the R.A.F.T.!
4) Gartner’s AI framework, along with AI predictions…
5) The economic potential of generative AI.

Comments? Drop them here!
Generative AI is taking the world by storm. Whether it sees it as an opportunity or a threat, every organization, in the tech world and beyond, is preparing in some way for the next phase of the generative AI boom. With speculation running wild, what most organizations need now is a clear-headed approach to using this technology effectively and sustainably. C2C partner Rackspace is providing just that. The Foundry for Generative AI by Rackspace is the company’s new suite of services that enables customers to implement AI with the right systems in place and the right values in mind. To learn more about this solution and what it can offer, C2C sat down with Rackspace CTO Srini Koushik for a long and wide-ranging discussion about the risks and the possibilities of “releasing AI into the wild.”

Tell us about your background and your role at Rackspace.

I’m Srini Koushik. I’ve been at Rackspace for about eighteen months. I’m the chief technology officer. I’ve got almost forty years in this industry. I got my start back in 1987 in India. My master’s thesis was on artificial intelligence, in an area called “frames,” which was all about giving context to the AI you’re building. At that time, it was very rudimentary. ’87 was a long, long time ago, but when you fast-forward and see how these incremental developments have built on one another over the past 40 years, I feel as excited as a twenty-something entering this new age of AI.

What is FAIR?

Most people in technology have heard of generative AI. One clear thing about generative AI is that it’s real and here to stay. If you go back and look at Rackspace’s 25-year history, we’ve been there whenever there’s been a technology shift where our customers needed help. At the start of the century, it was managed hosting when the web was taking off, and later the first public cloud with OpenStack when cloud computing emerged in 2009.
We think generative AI is such a massive opportunity that we must be here for our customers. We’ve been working with generative AI for Rackspace’s internal applications since the start of 2023, and we feel we have compelling offerings that we are ready to bring to the market. FAIR is the global practice that we’ve set up to co-create these solutions with our customers.

A little bit about the name: we chose the word FAIR because it stands for Foundry for Generative AI by Rackspace, but there are a few things we liked about it. Number one, the word foundry. If you go back to the Industrial Revolution, the foundry was where you brought raw materials together with machinery and had skilled professionals create things that had value to customers. That’s precisely where we are right now. We have the raw material, which is data; the machinery, the large language models; and skilled practitioners, our Rackers, coming together to develop AI-powered solutions that are valuable to our customers. At this point, many services companies in our industry have discussed the hundreds of use cases they’ve all identified. We have focused on converting those ideas into reality. So that’s what FAIR is. The other reason we loved the name FAIR was that it’s a guiding principle for us to focus on the responsible and sustainable adoption of AI. This isn’t AI for AI’s sake; it’s a responsible approach to AI that’s equitable to people, is secure, protects privacy and intellectual property rights, and consumes the planet’s resources in a way that promotes sustainability.

What does it mean to use AI fairly and sustainably, and how does FAIR accomplish that?

We decided to take many of our internal systems to Google Cloud three years ago. That was before my time, but when I got in, I was very pleased that that was the platform they picked.
I’m certified as an architect on all three hyperscalers, but the first one I got certified on was Google Cloud. Google has been a leader with its stance on sustainability and its approach to open source, and these were the same core values that Rackspace was built on, so it was a great fit. The IT function within Rackspace reports to me, and being on Google Cloud ourselves gave us an opportunity to be a pioneer with generative AI. We’ve been in preview programs for many of the products that Google has released, which allows us to learn by doing and by building solutions that help our business. We have had to learn how to build these solutions, select the appropriate large language model, tune the model, and secure and protect the privacy of data. As a mid-sized global organization, we also had to learn how to do these things frugally.

People ask me about sustainability, and I don’t say it lightly, but I say, “The only green technology is the one you don’t use.” Anything you use is going to consume electricity and resources. However, if you are very responsible about how you consume it and pay attention to that as a non-functional requirement of any solution you’re building, you’re going to end up reaping the benefits of that solution. Rackspace Intelligent Co-pilot for the Enterprise (ICE) is one of the first solutions we’re rolling out, and if we know what Rackspace ICE looks like when we deploy it to twenty people, we know the best way to take it and deploy it to a thousand people across the globe. Where do you deploy the models, and in what Google Cloud regions? How do you tie it to clean energy? We’re not only producing the outcomes we’re looking for, but we’re also trying to make sustainability a business outcome, and that’s critical.

What are some key use cases for FAIR? How can Google Cloud customers use it? How are Rackspace customers already using it?
We started with cross-domain use cases. We had two solutions to start: one is called RITA (Rackspace Intelligent Technology Assistant), and the other is Rackspace Intelligent Co-pilot for the Enterprise (Rackspace ICE). RITA is precisely what it sounds like. It’s an intelligent chatbot that uses intent-driven automation to automate and simplify the provisioning of IT services within Rackspace. Rackspace IT doesn’t have anyone answering the phones anymore. All the level-one support is done through automation, and second-level support goes to our engineers. It’s been very helpful because RITA automates the toil, freeing up our engineers to step in and become problem solvers. This is a case where AI is not replacing people but giving them an opportunity to move up in their careers. As Google Cloud continues to enhance its products, it opens up many new possibilities for us. For example, we can leverage language translation to make RITA multilingual, so Rackers across the globe can converse with her in their native language.

The other use case, Rackspace ICE, is essentially what Google Cloud calls enterprise search. It’s, “How do you take these islands of information that sit within an enterprise and start connecting and correlating them, and expand access to this wealth of context-rich information through a friendly natural language interface, so that you start to unlock solutions you didn’t even know existed?” The minute we start showing those capabilities, you start unlocking possibilities in other places. I spent time with our chief legal officer yesterday, and he asked, “Can we go search our contracts? I’ve got to be able to do the same thing.
I want our lawyers to focus on being lawyers and not spend the majority of their time looking for information relevant to what they are working on.” You can imagine that within any enterprise, many of these areas have been underinvested over the years, and they’ve grown up as silos: HR, finance, legal, and marketing. We can see Rackspace ICE solving these problems in all of these domains. Those two use cases are essential for making us more effective, and every one of them applies to every customer we have. As we work with our customers, we can address challenges from a position of experience, because we have dealt with the challenges our customers are likely to encounter in their journey: cloud platform setup, securing AI, security and privacy controls, policies, guardrails, and governance. While Google Cloud has made the technology much easier, implementing it within an enterprise is much more involved. We’ve been advising companies on how to do that. Three months ago, we created a generative AI policy that governs the responsible use of AI within Rackspace. Now we’re applying the policy as we create these solutions, and we’re finding that it was a good start, but we’ll probably have to keep adding to it. This is the learning process, so our customers can benefit from all our work in each of these domains.

A new technology emerges every year. Why a foundry for generative AI?

There’s a new technology every six months these days, not even every year, but we think the emergence of generative AI is not unlike the invention of the Gutenberg press. The Gutenberg press revolutionized the world by transforming the way ideas were communicated and knowledge was disseminated. With movable type and mechanized printing, the press made books more accessible, accelerating the spread of information.
This breakthrough democratized knowledge, fueling the Renaissance, Reformation, and Enlightenment, ultimately shaping the course of human history. Just as the Gutenberg press disrupted the dissemination of knowledge, generative AI is redefining how we create and interact with information. Like the press, generative AI will reshape industries, foster new ideas, and democratize artistic expression, opening doors to a future limited only by our imagination. With FAIR, we cut through all the complexities of AI and aim to make working with generative AI easy for our customers.

FAIR does three things: ideate, incubate, and industrialize. In the ideation phase, we’re trying to determine how desirable AI is to your organization. How ready are you as an organization for the advent of AI? Do you have the right policies, governance, and guardrails? We start with our database of use cases and work with customers to determine which use cases apply to them, which one to work on first, and whether the customer has access to the data they need to get started.

In the incubation phase, we move from establishing the desirability of AI to determining whether it’s feasible to implement the use case in the organization. You may want to do this, but if you don’t have all the data or the skills, you’re going to run into constraints. Feasibility is all about identifying those constraints and figuring out how to overcome them. At the end of the incubate phase, you have something you can take to the board: you can demonstrate results based on your data and get the buy-in of the board and the leadership to drive this forward.

The last step in our approach is the industrialize phase, which I call “releasing AI into the wild.” In the incubate phase, the solution was available to a handful of people.
But if you want to release it to your entire organization, you need to build new processes and techniques to manage and govern AI to ensure the desired outcomes. We’re working with our customers to co-create that journey and do it iteratively, and Google Cloud has allowed us to do this with the innovative products they are releasing at a breakneck pace. I’m excited about it; I go to bed, and when I wake up, they’ve released something new, and those products open up different solutions that we can co-create with our customers. We’re thrilled to be a Google Cloud partner for generative AI and data, and as we move forward and get our customers through the incubation phase, you’ll see a flurry of customer testimonials from FAIR.
In March 2023, we fielded the C2C Member Pulse to observe trends, skill gaps, and challenges in Google Cloud, and to learn how our members use community to enrich their cloud experience. This infographic highlights a portion of the findings. As a valued member of the Google Cloud customer community, we understand that each of you is on a unique journey with Google Cloud. Some of you may be well-versed and advanced in your Google Cloud usage, some are exploring and building business cases, and others may be anywhere in the middle of that spectrum. We recognize that not all cloud investments are the same, considering the wide range of products and services offered by Google Cloud. We also understand that respondents’ overall level of investment in Google Cloud—whether their own choice or their company’s choice—changes to meet digital transformation needs. These findings shed light on the investments made, talent needs, and solution usage patterns within our community, which helps us prioritize how to grow our network, foster connections, and curate content in our community.

Investments in Google Cloud

The survey results indicate that the majority of respondents, regardless of organizational size, are making increased investments in Google Cloud. Overall, respondents are either maintaining or expanding their investments in Google Cloud. The primary reason for increased investments is the integration and consolidation of systems on the Google Cloud platform, closing in on a digital-native or cloud-first approach. On the other hand, some respondents have reported decreased investments due to transitioning to other cloud vendors, with Amazon Web Services (AWS) and Azure being the most commonly mentioned alternatives. It’s worth noting that 21% of respondents were unsure about the changes in their investment levels, possibly indicating limited involvement in the decision-making process.
Google Cloud Talent

As organizations increase their investments in Google Cloud, it becomes evident that there is a growing need for skilled Google Cloud talent. Interestingly, the most common way organizations find skilled talent is by having their permanent staff learn Google Cloud skills on the job. This was consistent across all organization sizes—no matter how many people may be available to work on these products, they’ll all be learning as they go, emphasizing the ongoing learning and development required for working effectively with Google Cloud products.

Solution Area Usage

Among the technologies respondents use most, identity and security products, as well as data cloud products, take the lead. This aligns with the challenges faced in data maintenance and integration, highlighting the significance of these solution areas. Although AI and ML are among the least currently used technologies, they are the top Google Cloud products respondents are planning to use at their organizations. This reflects how AI will be increasingly sought after in the future. AI-enabled software was also noted as a top technology that will be needed for future digital transformation efforts. Respondents are least aware of startup and SMB solutions from Google Cloud. More guidance may be needed to support this solution area, as it is the one most respondents have no plans to use (16%). That said, respondents at organizations that have been active for less than one year are much more engaged with startup and SMB solutions: 48% currently use them, 87% are aware of them, and 23% plan to use them.

Visualizing It All

Take a look at how the community ranks the options they were given for Google Cloud investment levels, nurturing Google Cloud talent, and solution area usage. View the image as a full-scale PDF here.

Get the Most Out of Community

As members of our community, your voices shape our initiatives.
While we learned a lot about you collectively in this survey, we also want to meet your needs as an individual. Take a moment to update your profile today, and ensure you're receiving the most relevant and valuable content tailored specifically for your interests. Update Your Preferences
Don't be an "AI tourist." These are not Bruno's words… that's the title of a piece on the World Economic Forum blog that led him to a fascinating survey: the "2023 AI Readiness Report." In the report we learn that 81% of companies are working with GenAI, and how the focus on models has changed year over year. As usual, Bruno's blog also covers rich research, resources, and examples that will help you further your understanding of this week's topic. Get in the CarCast and let's be data driven! This CarCast covers:

1) The "2023 AI Readiness Report": interviews with almost 3,000 ML practitioners in North America between December 2022 and the end of January 2023. There are five key themes you can't afford to ignore!
2) How Twilio, Mayo Clinic, and Priceline use GenAI, from customer support to helping practitioners identify better information; these customers are providing great examples for our community to follow.
3) Finally, Bruno's guest describes what you need to communicate better.

What do you think? Comments? Drop them here! Have a great week!

Analytically yours,
Bruno
Lytics, Wpromote, and Google Cloud at 2Gather Los Angeles, June 6, 2023.

Buzz Hays, Global Lead for Entertainment Industry Solutions, and Iman Ghanizada (@iman), Global Head of Autonomic Security at Google Cloud, opened the event by discussing how the purpose of AI is to improve what people are already doing. Whether they are writers or animators in a given industry, AI aims to enhance the paintbrush for the artist. In trying to provide businesses with better tools, many questions surrounding security and data arose. One major question concerned how to collect effective data that would result in successful AI projects. A primary example discussed during the event was the entertainment industry. Many applications of AI within this industry need a sufficient amount of customer data to be developed. For the entertainment industry, identifying ad breaks and suggesting content for streaming platforms are examples of AI use cases. Jascha Kaykas-Wolff from Lytics stated that mature organizations can adapt to data pipelines. Working across different departments makes the decision-making process a lot easier, because it demonstrates how data is useful to certain parts of the organization. Paul Dumois, the CTO of Wpromote, also stated that businesses need to focus on specific problems to solve and retrieve data that will be helpful in providing solutions to those problems. Overall, the discussions between the panel and the audience highlighted that AI has many moving parts and trends. An organization should focus on a specific area and start with a single project to learn about the challenges and dynamics of working with AI in real time. Additionally, analyzing the core metrics of a business and securing top-down support can help organizations allocate resources when setting up projects or tasks associated with AI.
In April, C2C hosted its first 2Gather event in the Atlanta, Georgia area, a unique installment of the Let’s Talk Tech series, cohosted with Google Cloud and featuring Google Principal Developer Advocate Kelsey Hightower. Kelsey joined a roster of distinguished speakers including Alex Barnes, Head of Infrastructure at Calendly, Russ Ayres, Senior Vice President of Security Architecture and Engineering at Equifax, and Rae Williams, Director of Customer Engineering at Google, for a full program of conversations tackling some of the biggest issues facing the cloud technology space today. Read on below for a roundup of some of the terms, products, and themes the conversations covered.

Open source: Any software whose source code is freely available to all users for collaborative review. Russ and Alex are both sympathizers with open source philosophy, and Kelsey is a veteran of the open source community. In his words, “When you go out into the open-source world, you get to define technology for the world.”

Serverless: A cloud computing model allowing customers to access resources allocated by providers on demand without having to interact with a server, which Kelsey called “the final chapter of a mature compute pattern.”

Platform engineering: Streamlining infrastructure to make tools and services immediately available to teams for operation and use. According to Alex, platform engineering allows an organization to “present all capabilities as things you can consume programmatically to unlock velocity.”

Configuration management: The process for ensuring that products and systems maintain the same design and perform the same way. Russ contrasted configuration management with policy with the mantra, “If it was appropriately written it would be following the policy.”

Kubernetes: Google’s open-source containerization solution for software deployment and scaling has been setting standards and unlocking potential in the cloud space for nearly a decade.
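For readers newer to Kubernetes, the deployment-and-scaling pattern described above is easiest to see in a minimal Deployment manifest. The sketch below is purely illustrative; the name, image, and replica count are hypothetical and were not part of the event:

```yaml
# Hypothetical example: a Deployment asks Kubernetes to keep three
# replicas of a containerized web server running somewhere in the
# cluster; the scheduler decides which nodes they land on.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web            # hypothetical name
spec:
  replicas: 3                # desired count; Kubernetes reconciles toward it
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: web
        image: nginx:1.25    # any container image works here
        ports:
        - containerPort: 80
```

Applied with `kubectl apply -f deployment.yaml`, a manifest like this declares the desired state and leaves the placement of each replica to the cluster.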
Kelsey’s history with Kubernetes is well known, and Rae made sure to capitalize on the opportunity to press him for his comments in front of a live audience. For context, Kelsey compared Kubernetes to contemporary projects in DevOps, insisting that DevOps engineers too often miss the forest for the trees. “What Kubernetes represents is a ten-year-old pattern for assigning software to servers,” he said. “It’s the system you would build if you had the big picture.” Calendly: Calendly is a business communication platform offering unique scheduling solutions for organizations looking to maximize efficiency. Head of Infrastructure Alex Barnes joined Aiven Vice President of Product Marketing Amy Krishnamohan (@amy.km) for a fireside chat about the company’s journey on Google Cloud. Calendly was initially built on Ruby on Rails, but later wanted to invest in Kubernetes. Google was an obvious choice for a hosting solution, especially since calendar management is so essential to the company’s offerings. “Google Calendar is a massive part of what we interact with,” Alex explained. “How better to build on that relationship than to build on their platform?” Alex expanded on these comments via email after the event. “The partnership with Aiven and Google Cloud has allowed us to build a robust cloud data infrastructure that can handle the demands of our users,” he told C2C. “And it has given us the flexibility to scale up quickly, without having to worry about managing the underlying infrastructure.” AI: Predictably, many members of the audience were excited to ask questions about recent developments in AI. Kelsey was quick to dispel any notions that AI is going to make tech professionals irrelevant or push them out of the market. Instead, he said, AI is providing a new baseline for innovation.
When looking at generative AI solutions like ChatGPT, Kelsey said, users and developers should ask, “What is the dataset, and where is that dataset created?” Tech practitioners can still develop new solutions beyond the limits of the datasets used to train these products. “If ChatGPT can generate the code,” Kelsey told the audience, “it means we need a different interface.” Security: Security was the main focus of Russ’s comments during the customer panel with Alex and Kelsey. As Senior Vice President of Security Architecture and Engineering at Equifax, Russ is responsible for securing highly sensitive personal data at one of the leading financial services organizations. However, Russ originally came to security as a developer. Coming from that background, Russ believes that “good security is good engineering,” and vice versa. “Most security solutions try to be everything to everyone,” he added. From his point of view, though, teams should aim to build with purpose on the front end so that solutions are designed to run efficiently and at scale with minimal risk. Hear from more industry leaders on these topics and join the conversation in person at our upcoming 2Gather event in Los Angeles: 2Gather Los Angeles: The Future is Now, Security and AI
Companies use AI and machine learning (ML) to drive innovation and create a competitive advantage in their respective marketplaces. Excitement about AI is everywhere right now, so we wanted to bring together some of the top startups disrupting the AI space and bringing valuable solutions to enterprise customers. During this virtual event, four fast-growing startups shared their experiences using AI and ML. The promise of AI for business is immense. AI can help businesses automate tasks, improve efficiency, and make better decisions. However, AI is still a relatively new technology. This event covered topics such as: Increasing affordability and accessibility of AI, which is driving AI adoption in the enterprise space Use of AI to automate a wide range of tasks, from customer service to fraud detection Use of AI to improve efficiency and decision-making in a variety of industries At the end of this webinar, participants had a better understanding of AI and its potential applications. They discussed best practices, success stories, and practical lessons about what to keep in mind if this technology is something you are considering implementing as part of your business’s strategic roadmap for the future. Watch the full recording here:
This week, Bruno invites a special guest to discuss the latest in AI, break down Gartner’s latest “Spaghetti graphic,” and discuss what can make artificial intelligence scale. Gartner shows Amazon as the #1 vendor this year and Google as the fastest mover of the top 4. Check out the details here. This CarCast covers: What’s the deal with Gartner’s latest “Spaghetti graphic”? The latest database management vendors’ positions have been released by Gartner. Why should people care? “How can you scale A.I.?” Bruno explains the 3 attributes of a scalable artificial intelligence strategy. In summary, how can your team make the “I” in AI bigger than the “A”? A survey to get to the “truth in AI”: do the latest developments in A.I. make you more hopeful or more worried? See what others say and vote here. Have a great week!
The following article was written by C2C Global President Josh Berman (@josh.berman) as a member exclusive for TechCrunch. The original article is available here. In many ways, 2022 was a year of growth for the cloud technology space. Unpredictable macroeconomic developments saw many organizations thinking about and preparing for greater wins in the years to come instead of right away. In 2023, much of this preparation could come to fruition as the growth achieved in 2022 contributes to a stronger economy and rapid advancements, particularly in tech. Global IT spending is projected to climb by 5.1% to $4.6 trillion in 2023, according to Gartner, driven by an 11.3% increase in investments in cloud applications to $879.62 billion. What does this kind of increased spending and investment mean for organizations? C2C Global, a Google Cloud customer community, has identified five cloud trends to watch in 2023. “Moving forward, custom solutions, rather than one-size-fits-all offerings from individual providers, will increasingly become the norm.” AI and ML tech adoption will rise Every organization wants to harness the many and varied capabilities of AI and ML technology. Some want to use their data to enhance analytics and build predictive models, and others want to automate repeatable processes. Currently, many AI and ML models require extensive testing and training before they can be implemented at scale across large organizations hosting petabytes of data or serving wide customer bases. In fact, C2C’s research has found that only 47% of respondents are currently using AI and ML. However, these technologies ranked high among the ones that respondents hope to adopt in the future. The promise of these technologies is too significant to ignore. As models are refined, and training and testing become more reliable and automatic, organizations will come to rely on these technologies more.
We’ll see more low-code/no-code app development platforms Partly due to the rush to adopt AI and ML technologies that still require a lot of maintenance to perform reliably at scale, development teams are likely to implement low-code and no-code applications to reap the benefits of these technologies without the burden. For skilled developers, low-code and no-code options promise a lower barrier to entry for introducing and managing complex models. Significant savings in terms of time and cost, as always, will also be a massive draw. More organizations will host resources in multicloud environments Every cloud strategy requires delicate analysis to determine the proper balance of cost, efficiency, performance, scalability, and security. For a lot of organizations, sticking with a major cloud provider promises attractive savings that make a lot of practical sense. However, as cloud technology grows, individual products will be just as attractive to companies prioritizing scaling and transformation. Moving forward, even for companies using one cloud provider, adopting and implementing new resources from other providers may add value, and custom solutions, rather than one-size-fits-all offerings from individual providers, will increasingly become the norm. Remote work tools will continue to improve While remote work emerged during the pandemic as an emergency measure, the tools developed to accommodate it are now available as part of the expanded landscape of hybrid work technology. As AR and VR technology become more viable, organizations will continue to introduce and adopt new means of building a work environment that suits the needs of a diverse and changing workforce. Cloud adoption will increase in formerly resistant sectors Until recently, organizations in government and financial services resisted transformation due to the risk and burden of retiring entrenched legacy systems and migrating massive amounts of data.
Lately, though, the advantages of cloud adoption have been harder to ignore, and more organizations in these industries are adapting accordingly. For example, the U.S. Army recently said it would start using Google Workspace for its personnel operations. This expansion into previously less served areas of the cloud market speaks volumes for cloud adoption.
On November 10, 2022, C2C returned to Google’s offices in Chelsea, Manhattan for a 2Gather event all about intelligent automation. The robust event program included a fireside chat with representatives of Granite and Becton, Dickinson and Company moderated by C2C partner Automation Anywhere, a presentation from partner Palo Alto Networks, a conversation between partner Workspot and their customer MSC, and a panel featuring the speakers from MSC, Workspot, BD, and Granite. Google’s Drew Hodun introduced and moderated the event program, but the majority of the content was driven by the participating customers and partners and the guests in attendance with questions and ideas to share with the speakers and with one another. After a hello and a word on C2C from Director of Partnerships Marcy Young (@Marcy.Young) and an opening address from Drew, Ben Wiley of Automation Anywhere introduced Paul Kostas of Granite and Nabin Patro of BD, and offered some background about Automation Anywhere’s mission to build digital workforces for organizations that need them, with a particular focus on business processes like data entry, copy and paste, and parsing emails. Ben also mentioned Automation Anywhere and Google Cloud’s joint solutions for office departments like contact centers. Paul made a point of shouting out solutions like AA’s Automation 360 and Google Cloud’s Doc AI, which Granite used to build 80 automations in 9 months, and Nabin touched on how automation helped manage some of the work that went into BD’s manufactured rapid diagnostic test kit for COVID-19. “The technology is forcing us to think differently.” Next, Akhil Cherukupally and David Onwukwe of Palo Alto Networks took the stage to walk through some of the technical components of the security platforms the company offers organizations navigating the cloud adoption process.
Then Workspot’s Olga Lykova (@OlgaLykovaMBA) brought up Google Enterprise Account Executive Herman Matfes and Dung La and Angelo D’Aulisa of MSC for a look back through the history of the companies’ work together. Olga started things off with an origin story about the Citrix leaders who left their company to start a cloud-hosted platform with Workspot, which turned out to be a superior business model. Then she turned to the other guests to explore how Workspot helped MSC build automations on the front end of their business processes and ultimately implement these automations end to end.
Speaker panel at 2Gather: New York City
Finally, Drew, Angelo, Dung, Paul, and Nabin returned to the stage for a panel discussion breaking down all of the issues raised during the previous sessions. A question from Drew about how each organization’s work has impacted its customers prompted Paul to go long on the benefits of Granite’s services. When Angelo gently added, “We’re a Granite customer,” the audience laughed along with the panelists. “Thank you for being a customer,” Paul said. Drew also asked the group about what’s coming next at each company. The answers ranged from the concrete to the philosophical. “The technology is forcing us to think differently,” Nabin observed. In response to a question from a guest in the audience, Paul acknowledged the human impact of automation and stressed the importance of getting people to feel good about automating processes rather than fearing for the future of their jobs. As usual, the conversations did not stop here. The speakers and guests continued to share ideas and brainstorm solutions into the networking reception and even the informal dinner that followed, where Clair Hur (@write2clair) of Vimeo stopped by to explain how the company is cutting costs significantly after migrating from AWS to Google Cloud. More of these stories will be collected in our upcoming monthly recap post.
For now, watch the full recording of the New York event here: Extra Credit:
Before Shalini Mayor (@smayor) brought her background in automation to leadership roles in the private sector, she “almost became an astronaut.” As a subcontractor to NASA’s Langley Research Center, Shalini worked on various coding and algorithm development projects. She may have moved on from NASA before experiencing space travel, but much of her work as a director of Enterprise Automation at Salesforce is not unlike observing Earth from a distance. “With the explosive growth that you’ve seen at Salesforce, it’s very easy to get disconnected,” she told the crowd at a C2C 2Gather event in Sunnyvale, California. “Everything runs as a little startup within itself…when I was brought in, my primary role was to bring some structure to this madness.” “Since then we’ve been scaling out,” she continued, “trying to figure out ‘where do we have the most repetitive processes?’” Finance, HR, and IT operations are the major sites of repetitive processes at Salesforce, according to Shalini. What does it take to automate processes in so many different areas at a company with over 77,000 employees? In Shalini’s opinion, it takes more than just robotic process automation (RPA). “What we’re looking at really is a business process end to end,” she told Sunnyvale. “RPA is a small part of it. What about the rest of it? How do we reduce manual intervention in any process? How do we actually take that away so that it will just run?” To answer these questions, Shalini is thinking beyond the scope of the automation currently adopted at most organizations, sometimes back to the math and science she studied in graduate school as the basis for her education in AI and ML. “Anything that you look at all the way back down to the rudiment, it’s still exactly the same,” she said in an interview after the Sunnyvale event.
Even though the extent of what’s possible with automation today is “mind-boggling,” the automations themselves are still based on the same linear algebra as the first AI and ML models Shalini encountered as a student. For Shalini, thinking about the foundations of automation makes it possible to look beyond RPA bots and straightforward rule-based models, incorporating approaches like decisioning and illuminating new opportunities. “What we’re looking at really is a business process end to end.” At Salesforce, these new possibilities include Natural Language Processing (NLP) and Natural Language Understanding (NLU) technologies like Google Cloud’s Document AI and other solutions in high demand at the company’s contact centers, which Shalini sees as high-priority contexts for automation use cases. Despite her enthusiasm for automation, however, Shalini is careful not to forget the human factor of workplace processes. She is not interested in reducing or combining job roles, as some workers fear executives may plan to do with automation in place. “If I can take some of these mundane tasks off people’s lists,” she told Sunnyvale, “that’s where the growth comes in.” This human factor is also what Shalini recognizes as the value of a customer community and open spaces for peer-to-peer discussion like C2C’s events. At Sunnyvale, she particularly appreciated “the fact that I could speak with so many people and help them learn something” and “learning that people are facing similar issues.” On November 10, 2022, Shalini’s colleague at C2C Partner Automation Anywhere, Vice President of Commercial Sales Ben Wiley, will appear alongside a diverse panel of guests to elaborate on some of what Shalini discussed in Sunnyvale, face-to-face, with a fresh group of Google Cloud customers and partners looking to automation to solve their business problems. To join them, use this link to register today. Extra Credit:
When Meiling He, Senior Data Scientist at Rockwell Automation, was asked at the last minute to fill in for her manager, Francisco Maturana (@maturanafp), at 2Gather: Chicago, she had never heard of C2C Global. The next day, she was on a train from Milwaukee preparing to speak at the Google Cloud Customer Community’s first face-to-face event in the Midwestern US. “Yesterday was the first time I heard about this, at around 3:00 p.m.,” she said. “It was new, but my manager sent me the information about what questions would be asked, and he did have his preparation for the event, so I got the information I needed.”
From left: Lilah Jones, Paul Lewis, Meiling He, and Vrinda Khurjekar
Meiling presented alongside Pythian CTO Paul Lewis, who spoke to C2C in advance of the event about how the company prepares data sets to be used for a variety of AI and ML solutions, and Vrinda Khurjekar, Senior Director of AMER Business at Searce. The panel discussion, moderated by Google Head of ISVs and Marketplace Sales Lilah Jones, explored how businesses can use AI and ML solutions in general to get the most value out of their cloud adoption. Even though she had had so little time to prepare for it, Meiling’s experience at the event was a pleasant surprise: “I think it was so fun. I learned a lot from the perspective, the questions, the answers. It’s so nice to be around people like Lilah and Paul. They’re so knowledgeable and outgoing.” Meiling was also pleasantly surprised to be able to make her own connections following the scheduled program with other customers in attendance. She appreciated having the chance to talk shop with a fellow data practitioner, Revantage Data Engineer Trevor Harris. Many of the other guests in attendance were satisfied with the opportunity to network as well. “It’s a great place to connect with other professionals, business and also technical, and it’s a really wonderful experience,” said Henry Post of US Bank.
“Great food, great presentation, and great people.” Jeff Parrish (@Jeff P) of Redis agreed. “I thought it was excellent,” he said. “It was a good flow, good panel, good interaction, and a good pick of different industries and different people.” “I think it was so fun. I learned a lot from the perspective, the questions, the answers.”
Guests mingling at 2Gather: Chicago
The opportunity to connect with other Google Cloud customers was also a major value-add for the Google and C2C Partners in attendance. “It was excellent. I learned a lot about Google’s partnership with some of its customers, and got to network with some excellent people,” said Brendan O’Donnell (@bpod1026), a customer success manager at Aiven, which joined C2C as a partner after sending employees to multiple C2C events this spring and summer. “I met some representatives from Salesforce. Jeff from Salesforce.” Unlike Meiling, Jeff Branham (@Branham24), current Director of Industry Alliances at Salesforce, knew all about C2C. In fact, as many of our members will remember, Jeff served as C2C’s first Executive Director before moving on to his new role. He was excited to be able to attend a C2C event in person, having left the company with COVID quarantine measures still in place, and was pleased to see how the team had grown. He was also pleased to be able to make some connections of his own, particularly with Paul Lewis of Pythian, who gave him some valuable insights as a representative of a Google partner company about collaboration between CTOs and CFOs. Meiling was also excited to be able to hear from a CTO, as a practitioner who hopes to someday be able to move into an equivalent role. “Since day one of working at Rockwell I wanted to be a data scientist,” she said.
“I was the Business Intern, then Data Analyst Intern, then IT Associate, then Data Scientist, then finally Senior Data Scientist, so it was a long journey.” Now that she has reached this point in her career, Meiling is grateful to be able to connect with leaders who inspire her to take the next step professionally. She looks forward to more opportunities to do so at C2C events. “I would like to know what other people are doing at their own company,” she said. “I hope I will be invited.” Extra Credit:
On August 11, 2022, C2C will host 2Gather: Chicago, the Google Cloud customer community’s first in-person event in the Chicago area. Moderated by Lilah Jones, Head of Corp Sales, Central US, Google Cloud, the event program will feature speakers Francisco Maturana, a data architect at Rockwell Automation, Vrinda Khurjekar, Senior Director of AMER Business at Searce, and Pythian CTO Paul Lewis. The panel will discuss the technical and business advantages of using AI and ML on Google Cloud. In advance of the event, we reached out to Paul Lewis, an engaged and active member of our community who joins us from our foundational platinum partner Pythian, to discuss AI and ML insights, connecting business and technical collaborators, and the value of a peer-to-peer Google Cloud community. Pythian has received significant industry recognition for its data solutions. To what extent today does a data solution necessarily require an AI or ML component? It is fair to say that most data solutions have a “why,” and that why is because I’m trying to create some sort of insight. Insight might be for the purpose of creating a new customer experience, or creating some insight for efficiency, or monetizing the value of a current set of offerings, and that insight requires a combination of three things: I need to find where the data is in my core systems from my third party, I need to create analytical value in a data platform, and I need to use AI and ML algorithms to source out that piece of insight which I’ll use to make a decision. So it has all three of those components. I’d argue that if you’re starting with the end, starting with the insight, all of that technology and process is required to deliver on it. You spoke with C2C earlier this year about cloud security and the shared roles of businesses and cloud providers. When working with systems and processes that are largely automated, what cloud security considerations arise? 
Cloud security requires the assumption that you are going to bring your algorithms to the data versus the data to the algorithms, a really big shift from exporting data out of a production system onto your laptop, producing your algorithms in your API of choice, and then sending that algorithm back up to be both trained and tested. Now it’s about training and testing in the cloud, which has access directly to those data sets internally and externally. So that’s the big shift: moving where you’re actually developing your model, training your model, and creating inference or executing on that model. It is the best bet to do that in the cloud. A big problem in healthcare, as you can imagine, is sharing information across organizations. Since data sharing is required to make complex diagnostic decisions, I need to be able to package up that information from a diagnostics perspective, share it amongst a group of people, and then that prediction can come together. Multiple practitioners can participate in the model development, multiple practitioners can provide input into the model and the training, and then infer it for the purpose of new patients coming in. On August 11, at 2Gather: Chicago, you’ll be speaking alongside Francisco Maturana, a data architect at Rockwell Automation, and Vrinda Khurjekar, Senior Director of AMER Business at Searce. As a CTO, how does speaking alongside both technical and business professionals influence the kind of discussion you’re able to have? My conversations tend to be balancing the difference between why and how. On the business side, what are ultimately the business goals we’re trying to achieve? It tends to boil down to something like data monetization. Now, monetization could simply mean selling your data, it could mean creating better insight on your customers, maybe as customer segmentation, maybe it’s wrapping a non-data-related product with a data-related product.
Like a checking account alongside an ability to predict spending behavior changes over time. Or it might be internal, making better M&A decisions, or creating some sort of efficiency in a process, or just making general business decisions better or cleaner in a sense. So, you can take that why and say, ‘well, that why can be delivered on a variety of hows.’ A how can be as simple as a query and as complex as the entire data engineering chain. And that’s the bridge between the why and the how. Not only does the data engineer or data architect get a better appreciation for the type of business decisions I need to be able to make based on this work, but the business person gets to understand the potential difficulties of making that actually true. Do you think that most customers come to a peer-to-peer panel discussion with a why or a how in mind? Yes. Very rarely is it unanswered questions. Very rarely is it, ‘I know I have some nuggets of gold here, could you possibly look into my pot and see if there’s anything interesting?’ That might have been true five years ago, but people are much more well-read, definitely on the business and the technology side. There has to be a why, and if there has to be a why, there’s a one-to-many set of potential hows. What’s our best bet for the how? Data engineers, data modelers, and data scientists are the go-to people to hire. In fact, it’s so complex that I now need partnerships of talent, so I might not know whether I need a junior, senior, or intermediate scientist, because I don’t have that background. I don’t have that expertise, so I’ve got to lean on partnerships in order to figure that out. Is being able to find the right why for the right how what makes a community of Google Cloud customers uniquely valuable? Exactly. It’s also sharing in our expertise.
There’s this huge assumption that I just have to acquire the expertise to deliver on my particular why or how, that I just need to learn Python in twenty-one days, that I just need to get another data modeler to understand what a bill is, what a person is, what a patient is, what a checking account is, but the reality is you have to balance expertise with experience. You could hire a bunch of people or train up your existing staff, but if they’ve never done it before, that’s where you need partnerships. That’s why you need a community. That’s why you need to be able to talk to your peers. That’s why you need to have these kinds of conversations, to balance what I think I can do with what’s actually possible, or what’s been done before. Are there any particular conversations you’re hoping to have at the event in Chicago? Yeah, absolutely. The conversations I’m looking to have are unique or interesting whys that I think could be compelling across a variety of industries. What I find most interesting isn’t that two retail chains have the same customer segmentation problem, it’s that you can take customer segmentation in retail and apply it to manufacturing cookies. So, something we can reuse across these industries, because in my opinion these industry solutions are going to be at the forefront of the whys. I’m going to be able to download cookie client segmentation and then augment it for my needs. I don’t have to invent it going forward. Do you have any final thoughts to share with the Google Cloud customer community? I’m really looking forward to this particular event. It’s rare that we get to have real peer-to-peer conversations, so I’m absolutely looking forward to it, and Google’s a nice space to do it in, so that’s always a bonus. Are you based in Chicago? Do you need to find a how for your why, or vice versa? Join Paul, the C2C Team, and the rest of our distinguished speakers at 2Gather: Chicago on August 11! Register here:
Machine Learning is an essential component of every major tech product today. With tools like BigQuery ML, you don’t have to be a data scientist to quickly and easily incorporate ML into your applications. At a recent C2C Deep Dive event hosted by the Google Cloud startups team, Google Cloud AI/ML Specialist Customer Engineers Mike Walker and Rob Vogelbacher explained how you can use BigQuery ML to power insights for you and your customers. There are many built-in algorithms for regression, classification, clustering, forecasting, and recommendations that you can train with just a few lines of SQL. All these help you learn more from your data in a short time and in a cost-effective way. The models you build can be called from BigQuery or from external applications. The recording from this session includes the following topics: (0:00) Introduction from C2C (2:35) What is BigQuery? (6:00) Decoupled storage and compute on BigQuery (8:00) Typical ML Workflow (10:00) BigQuery ML and AI (11:30) BigQuery ML-supported models and features (17:30) BigQuery Use cases (18:30) BigQuery Explainable AI (21:05) AutoML Tables and BigQuery ML (23:25) BigQuery ML Example Models: Miami Housing Dataset (41:30) Audience Q&A Watch the full recording of the conversation below: Extra Credit:
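To give a sense of the “few lines of SQL” described above, here is a minimal sketch of a BigQuery ML workflow: one statement to train a model and one to score rows with it. The project, dataset, table, model, and column names below are hypothetical placeholders, not the session’s actual demo; running the statements would require a real BigQuery project (for example, via the google-cloud-bigquery client).

```python
# A hedged sketch of BigQuery ML's SQL-first workflow. The helpers only build
# the SQL strings; names like "miami_listings" and "sale_price" are made up.

def create_model_sql(project: str, dataset: str, model: str,
                     source_table: str, label_column: str) -> str:
    """Build a CREATE MODEL statement for a simple linear regression model."""
    return f"""
CREATE OR REPLACE MODEL `{project}.{dataset}.{model}`
OPTIONS (model_type = 'linear_reg', input_label_cols = ['{label_column}']) AS
SELECT * FROM `{project}.{dataset}.{source_table}`
""".strip()

def predict_sql(project: str, dataset: str, model: str,
                source_table: str) -> str:
    """Build the ML.PREDICT query that scores rows with the trained model."""
    return f"""
SELECT * FROM ML.PREDICT(
  MODEL `{project}.{dataset}.{model}`,
  (SELECT * FROM `{project}.{dataset}.{source_table}`))
""".strip()

if __name__ == "__main__":
    # With credentials configured, these strings could be passed to the
    # google-cloud-bigquery client's query() method.
    print(create_model_sql("my-project", "housing", "price_model",
                           "miami_listings", "sale_price"))
```

Because the model lives in BigQuery, the same ML.PREDICT query can be issued from the console, scheduled queries, or any external application that can run SQL.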
Machine Learning is an important component of every major tech product today. However, not everything beyond Excel sheets is big data, and not all big data problems require ML. The most important function of ML should be to supplement the product. Decision makers in the ML and big data spaces should know how an ML mindset differs from a traditional software development mindset. Hear from startup mentor, program manager, and trained architect KC Ayyagari (@kcayyagari), Senior Customer Engineer at Google Cloud. The recording from this session includes the topics listed below, plus plenty of conversation infused in the presentation from open Q&A from community members present at the live event: (0:00) Welcome and introduction from C2C and the Google Startups Team (3:30) Agenda overview (5:00) What is Machine Learning? (16:55) How ML is different from normal software development and how to represent physical problems in data (42:30) The do’s, don’ts, and focus areas in the ML mindset for managers Watch the full recording below: Preview What’s Next Join the Google Cloud Startups group to stay connected on events like this one, plus others we have coming up:
Druva Reddy, a Solutions Architect specializing in ML at Google Cloud, discussed Vertex AI, which brings all of Google Cloud’s ML services together under one unified UI and API. In Vertex AI, you can now easily train and compare models using AutoML or custom code training and store all of your models in one central model repository. In this overview session, Druva covered some major components of the Vertex AI platform, from training to prediction to MLOps services. This recording also includes a demo of an end-to-end example that shows these services in action. Review all parts of the presentation, including: (00:00) Introduction to the Google Cloud Startups team (05:05) Introduction to functional solutions with AI (10:15) ML on GCP with Vertex AI: what’s included in Vertex AI; choosing the right tools or pre-trained models; low/no code (25:55) Operationalizing ML: MLOps, life cycle, and framework; using Vertex AI with MLOps (32:55) Vertex AI demo (44:05) Open community questions Extra credit: Google Cloud Vertex AI Docs Get started in Cloud Console Best practices for implementing machine learning on Google Cloud To connect with Druva, reach out to him directly in the Google Cloud Startups community and tag @Druva Reddy
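As an illustration of the train-then-deploy flow described above, here is a hedged sketch using the Vertex AI Python SDK. All project, region, bucket, column, and display names are hypothetical, and actually running it requires the google-cloud-aiplatform package plus Google Cloud credentials; treat it as a sketch of the workflow, not a reproduction of the session’s demo.

```python
# Illustrative Vertex AI flow: create a tabular dataset, train with AutoML,
# then deploy the model to an endpoint. Names below are placeholders.

def training_config(project: str, region: str, display_name: str) -> dict:
    """Collect the handful of settings an AutoML tabular training job needs."""
    return {
        "project": project,
        "location": region,
        "display_name": display_name,
        "optimization_prediction_type": "regression",
    }

def main() -> None:
    # Third-party dependency, assumed installed: pip install google-cloud-aiplatform
    from google.cloud import aiplatform

    cfg = training_config("my-project", "us-central1", "housing-automl")
    aiplatform.init(project=cfg["project"], location=cfg["location"])

    # Dataset source path is hypothetical.
    dataset = aiplatform.TabularDataset.create(
        display_name="housing-data",
        gcs_source="gs://my-bucket/housing.csv",
    )
    job = aiplatform.AutoMLTabularTrainingJob(
        display_name=cfg["display_name"],
        optimization_prediction_type=cfg["optimization_prediction_type"],
    )
    model = job.run(dataset=dataset, target_column="sale_price")

    # Deploying registers the model behind a managed prediction endpoint.
    endpoint = model.deploy(machine_type="n1-standard-4")
    print(endpoint.resource_name)

if __name__ == "__main__":
    main()
```

The same SDK objects also back the central model repository mentioned in the session: trained models land in the Vertex AI Model Registry, where versions can be compared before deployment.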
On Tuesday, March 8, also known as International Women’s Day, C2C France Team Leads @antoine.castex and @guillaume blaquiere were excited to welcome Google Lead Developer Advocate @Priyanka Vergadia to host a powerful session for the Google Cloud space in France and beyond. These sessions are intended to bring together a community of cloud experts and customers to connect, learn, and shape the future of cloud. At this C2C Connect event, Vergadia led a broad and enthusiastic discussion about Vertex AI and the MLOps pipeline.
60 Minutes Summed Up in 60 Seconds
ML and AI are the cornerstone technologies of any company that wants to leverage the value of its data. ML can be used across different platforms, including Google Cloud; BigQuery ML is a key example of serverless ML training and serving. Vertex AI is the primary end-to-end AI product on Google Cloud and interacts with many other Google Cloud products. Low-code and no-code users can reuse pre-trained Vertex AI models and customize them to fit their business use cases, which makes the platform a good fit for beginners and profiles without ML engineering experience. Advanced users can leverage Vertex AI’s managed Jupyter notebooks to explore, analyze, and build their models. Vertex AI also allows users to train models at scale, deploy serverless models, and monitor drift and performance. As Vergadia reminded the audience, ML engineering makes up only 5% of the effort that goes into the ML workflow. The upstream steps (data cleaning, discovery, feature engineering, preparation) and the downstream steps (monitoring, retraining, deployment, hyperparameter tuning) must be optimized to save time, effort, and money. To this end, Vertex AI supports pipeline definitions, based on TFX or Kubeflow Pipelines, to automate the end-to-end tasks around ML engineering; this practice of pipeline automation is known as MLOps. Watch the full recording of the session below:
Despite its 60-minute time limit, this conversation didn’t stop.
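The upstream/downstream framing Vergadia described can be pictured as a chain of stages, each feeding the next. The toy sketch below illustrates only the shape of that idea in plain Python; a real Vertex AI pipeline would express these stages as Kubeflow Pipelines or TFX components, and all names and data here are invented for illustration.

```python
# Toy sketch of an ML pipeline: each stage is a plain function, and the
# "pipeline" is simply their composition. Data and field names are invented.

def clean(rows):
    # Upstream step: drop records with missing values.
    return [r for r in rows if None not in r.values()]

def engineer_features(rows):
    # Upstream step: derive a simple feature from the raw fields.
    return [{**r, "ratio": r["clicks"] / r["views"]} for r in rows]

def train(rows):
    # Stand-in for model training: "learn" the mean ratio.
    return sum(r["ratio"] for r in rows) / len(rows)

def run_pipeline(rows):
    # End-to-end run: clean -> feature engineering -> training.
    return train(engineer_features(clean(rows)))

raw = [
    {"clicks": 2, "views": 10},
    {"clicks": 5, "views": 10},
    {"clicks": None, "views": 7},  # dropped by clean()
]
model = run_pipeline(raw)
print(model)  # 0.35
```

The point of the sketch is that training is one small stage among many; everything around it (cleaning, feature work, monitoring, retraining) is what pipeline tooling automates.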
Vertex AI is a hot topic, and it certainly kept everyone’s attention. The group spent time discussing data warehouses, data analytics, and data lakes, focusing on products like BigQuery, Data Studio, and Cloud Storage. Attendees also offered their own feedback on the content of the session. For example, halfway through the presentation, Soumo Chakraborty asked how users can integrate ML pipelines into a CI/CD pipeline, and pipeline integration became a focal point of the remainder of the discussion.
Preview What’s Next
These upcoming C2C events will cover other major topics of interest that didn’t make it to the discussion floor this time around:
Make the Cloud Smarter, April 12, 2022
Looker In the Real World with Looker PM Leigha Jarett, May 10, 2022 (in-person event in Paris)
If these are topics you’re eager to explore at future events, be sure to sign up on our platform!
Extra Credit
Looking for more Google Cloud product news and resources? We’ve got you. The following links were shared with attendees and are now available to you:
Vertex AI
BigQuery ML
C2C Events
The Bank of England (BoE), one of the world’s oldest central banks, is also one of the most visible and high-profile investors in innovation. Over the last decade, it has developed its own innovation lab, with projects including the Bank of England Accelerator, Her Majesty’s Regulatory Innovation Plan, and the Regulatory Sandbox. It introduced a RegTech cognitive search engine and uses artificial intelligence (AI) technologies for chatbots and predictive real-time insights. More recently, the Bank made headlines with its plans for a “digital pound” on the blockchain, called Britcoin, which will use AI in its executable smart contracts.
Cognitive search engine
The BoE employs a Swiss-made cognitive search engine as its enterprise search solution. The tool uses AI and ML to gather data from multiple sources and deliver relevant real-time responses to users’ questions. The Bank also embeds it in its CRM to improve client conversations and reduce meeting preparation times. Users find answers to their questions up to 90% faster than they would with a manual search. This tool not only boosts productivity and improves client trust but also makes it easier for the Bank to comply with ever-changing regulations.
Chatbots
The BoE uses chatbots for various services, including:
Functional chatbots that help customers with routine questions, such as directing callers to the ATMs closest to their locations.
More sophisticated AI conversational assistants that feed customers investment recommendations and real-time market news, among other industry-related data.
Chatbots that combine predictive and prescriptive analytics to give decision-makers at the BoE real-time insights. Examples include helping BoE executives gauge their biggest competitors in the micro-lending space and determine which customer segment to target when advertising a new mobile app.
Britcoin
Britcoin is the Bank of England’s plan for a digital currency accepted by retailers and other companies in lieu of debit and credit cards. Owners would initially face limits on how much Britcoin they could hold, but conversion to sterling and transactions in the currency would take only minutes. Unlike most cryptocurrencies, Britcoin will be a stablecoin, meaning it will be pegged to the UK currency to avoid the problems of crypto fluctuations. Supporters appreciate that Britcoin would use AI-enabled smart contracts to execute DeFi transactions that are cheaper, faster, and more transparent than online payments and money transfers. Critics fear the innovation could lead to financial instability, along with higher loan and mortgage rates, among other problems. To resolve these issues, a task force has been assembled to report on the merits of the CBDC (central bank digital currency) by the end of this year.
Why the Bank is interested in AI
In her 2021 keynote address at the FinTech and InsurTech Live event on how the Bank of England uses AI, Tangy Morgan, an independent BoE advisor, described how the Bank conducted a survey assessing how banks headquartered or operating in Britain used machine learning and data science during Covid-19, and how the BoE can profit from that report. The BoE found that the use of AI was growing at an exponential pace and could benefit the Bank in various ways. Possible applications of AI in this context include:
Money laundering prevention, where AI identifies patterns of suspicious behavior to support anti-money-laundering (AML) efforts.
Underwriting and pricing applications, where big data analytics scrutinizes customers’ risk profiles, tailoring premiums to match individual risks.
Credit card fraud detection, whereby AI analyzes large numbers of transactions to detect fraud in real time.
The Bank of England asserts that “developments in fintech … support our mission to promote the good of the people of the UK by maintaining monetary and financial stability.” Are you based in the UK? What do these uses of AI bring to mind for you? Write us on our platform and let us know.
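To make the fraud-detection idea above concrete, here is a deliberately simple sketch: flag a transaction whose amount sits far outside an account’s typical spending. A production system would use trained models over many features; this z-score check, with invented data, only shows the shape of the problem.

```python
# Toy anomaly check: flag amounts far from the account's historical mean.
from statistics import mean, stdev

def is_suspicious(history, amount, threshold=3.0):
    """Flag `amount` if it is more than `threshold` standard
    deviations away from the historical mean spend."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        # No variation in history: anything different stands out.
        return amount != mu
    return abs(amount - mu) / sigma > threshold

history = [12.50, 9.99, 15.00, 11.25, 13.40]
print(is_suspicious(history, 14.00))   # False: in line with past spending
print(is_suspicious(history, 950.00))  # True: extreme outlier
```

Real systems replace the threshold rule with models trained on labeled fraud cases, but the core question — "is this transaction typical for this customer?" — is the same.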
The Startups Roundtable series hosted by C2C and Google Cloud Startups continued on Tuesday, Jan. 25 with another session on AI and ML, this one devoted solely to technical questions. These roundtable discussions are designed for startup founders seeking technical and business support as they realize their visions for their products on the Google Cloud Platform. This time, 10 Googlers, including 6 Customer Engineers, led private small-group discussions with over forty guests from the C2C community. Watch the introduction to the event below:
As in the previous Startups Roundtable, after the introduction the hosts assigned the attendees to breakout rooms where they could ask their questions freely with the attention of the Google staff on the call. The breakout rooms in these sessions are not recorded, but C2C Community Manager Alfons Muñoz (@Alfons) joined one of the conversations to gather insights for the community. In this breakout room, Google Customer Engineer Druva Reddy (@Druva Reddy) explained how to understand the value proposition a startup offers and how users will interact with the business. Reddy advised guests to focus on having a vision of the market and to build a product with a high level of abstraction, rather than focusing simply on the data-specific tools they are going to use. According to Muñoz, after the time allotted for the discussions in the breakout rooms ended, the conversations kept going. Guests had more questions to ask and more answers to hear from the Google team. The hosts invited all attendees to bring their questions to the C2C platform for the Googlers to answer after the event. Two guests took them up on the offer, and Reddy wrote them both back with detailed advice. Markus Koy (@MarkusK) of thefluent.me wrote:
Hi everyone, I am using the word-level confidence feature of the Speech-to-Text API in my app (POC) https://thefluent.me that helps users improve their pronunciation skills.
Is there an ETA for when this feature will be rolled out for production applications, and if so, for which languages? @osmondng, @Druva Reddy thank you for offering to reach out to the Speech API team. Markus
Reddy wrote back:
Hi MarkusK, It was great chatting with you! The product team is aiming for Word Level Confidence to reach the General Availability (GA) stage by the end of Q2 2022. Regarding languages, it currently supports English, French, and Portuguese, and multiple additional languages will be supported as we roll out support in phases. Please stay tuned and check out announcements here: https://cloud.google.com/speech-to-text/docs/languages. Thanks, Druva Reddy
The next day, Erin Karam (@ekaram) of Mezo wrote:
Hello, we are looking for guidance on training our Dialogflow CX intent. Our model is limited by the 2,000-phrase limit on training phrases for a single intent. Our use case is that we are attempting to recognize symptoms from the user. We have 26 different symptoms we are trying to recognize, and tens of thousands of rows of training data to train for them, so the upper limit of 2,000 is hampering our end performance. Please advise. Erin
Reddy responded:
Hi Ekaram, Thanks for joining today’s session! The default limit is 2,000 training phrases per intent. This amount should be enough to describe all possible language variations, and having more phrases may make the agent slower. You can try to filter out identical phrases or phrases with identical structure. You don’t have to define every possible example, because Dialogflow’s built-in machine learning expands on your list with other, similar phrases. However, create at least 10 to 20 training phrases so your agent can recognize a variety of end-user expressions. Some of the best practices I would suggest are:
Avoid using similar training phrases in different intents.
Avoid special characters.
Do not ignore agent validation.
Let me know if that works.
A startup is a journey, and no startup founder will be able to get all the answers they need in one session. That’s why the Startups Roundtable series is ongoing; more business and technical roundtables will be coming soon. For now, if you are a startup founder looking for more opportunities to learn from the Google Startups Team and connect with other startup founders in the C2C community, register for these events for our startups group:
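Reddy’s suggestion to filter out identical training phrases before hitting the per-intent limit can be sketched in a few lines. "Identical" here means equal after lowercasing, trimming, and dropping punctuation; detecting structurally identical phrases would need more work (for example, masking entity values first). The phrases below are invented examples, not Mezo’s data.

```python
# Deduplicate training phrases that differ only in case, whitespace,
# or punctuation, keeping the first spelling of each.
import re

def dedupe_phrases(phrases):
    seen, kept = set(), []
    for phrase in phrases:
        # Normalize: lowercase, strip punctuation, collapse whitespace.
        key = " ".join(re.sub(r"[^\w\s]", "", phrase.lower()).split())
        if key not in seen:
            seen.add(key)
            kept.append(phrase)
    return kept

phrases = [
    "My sink is leaking",
    "my sink is leaking!",
    "Water is dripping under the sink",
]
print(dedupe_phrases(phrases))
# ['My sink is leaking', 'Water is dripping under the sink']
```

Running a pass like this over tens of thousands of rows often shrinks the set considerably before any harder decisions about which phrases to keep.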
Use cases for artificial intelligence (AI) are so many and varied that the meaning of the term itself can be hard to pinpoint. The Google Cloud Platform supports a host of products that make specific AI functions easy to apply to the problems they’re designed to solve. Vision AI is a cloud-based application designed to make computer vision applicable in a wide variety of cases. But what is computer vision, exactly?
On December 8, 2021, C2C invited Eric Clark of foundational C2C partner and 2020 Google Cloud Partner of the Year SpringML to answer this question. Clark’s presentation, a C2C Deep Dive, offered an enriching explanation of the concept of computer vision, as well as projections for its impact on the future of AI. Most notably, Clark used Vision AI to present multiple demonstrations of computer vision in action. To set the stage for these real-world applications, Clark offered a breakdown of the essential functions of computer vision. Next, Clark used real footage of traffic at a busy intersection to demonstrate how computer vision monitors this footage for incidents and accidents to calculate travel times. To showcase Vision AI’s video intelligence capabilities, Clark uploaded a video and applied different tags to demonstrate how computer vision recognizes and identifies individual elements of different images. Clark’s final demonstration was an in-depth look at several infrastructure maintenance use cases, starting with a look at how computer vision can be used to detect potholes and other impediments to safe road conditions. Clark’s demonstrations made clear that Vision AI is as user-friendly as it is powerful, and Clark made sure at the end of his presentation to invite attendees to create a trial account on the Google Cloud Platform and try out the API themselves. Alfons Muñoz (@Alfons), C2C’s North American Community Manager, echoed his encouragement.
“It’s really easy to try it out,” he said. If you haven’t already, set up an account on the Google Cloud Platform and try using Vision AI for help with a current project, or even just for fun. Write us back in the community to let us know how it goes!