Learn | C2C Community

5 Cloud Trends to Track in 2023

The following article was written by C2C Global President Josh Berman (@josh.berman) as a member exclusive for TechCrunch. The original article is available here.

In many ways, 2022 was a year of growth for the cloud technology space. Unpredictable macroeconomic developments saw many organizations thinking about and preparing for greater wins in the years to come instead of right away. In 2023, much of this preparation could come to fruition as the growth achieved in 2022 contributes to a stronger economy and rapid advancements, particularly in tech. Global IT spending is projected to climb by 5.1% to $4.6 trillion in 2023, according to Gartner, driven by an 11.3% increase in investments in cloud applications to $879.62 billion. What does this kind of increased spending and investment mean for organizations? C2C Global, a Google Cloud customer community, has identified five cloud trends to watch in 2023.

"Moving forward, custom solutions, rather than one-size-fits-all offerings from individual providers, will increasingly become the norm."

AI and ML tech adoption will rise

Every organization wants to harness the many and varied capabilities of AI and ML technology. Some want to use their data to enhance analytics and build predictive models, and others want to automate repeatable processes. Currently, many AI and ML models require extensive testing and training before they can be implemented at scale across large organizations hosting petabytes of data or serving wide customer bases. In fact, C2C's research has found that only 47% of respondents are currently using AI and ML. However, these technologies ranked high among the ones that respondents hope to adopt in the future. The promise of these technologies is too significant to ignore. As models are refined, and training and testing become more reliable and automatic, organizations will come to rely on these technologies more.

We'll see more low-code/no-code app development platforms

Partly due to the rush to adopt AI and ML technologies that still require a lot of maintenance to perform reliably at scale, development teams are likely to implement low-code and no-code applications to reap the benefits of these technologies without the burden. For skilled developers, low-code and no-code options promise a lower barrier to entry for introducing and managing complex models. Significant savings in terms of time and cost, as always, will also be a massive draw.

More organizations will host resources in multicloud environments

Every cloud strategy requires delicate analysis to determine the proper balance of cost, efficiency, performance, scalability, and security. For a lot of organizations, sticking with a major cloud provider promises attractive savings that make a lot of practical sense. However, as cloud technology grows, individual products will be just as attractive to companies prioritizing scaling and transformation. Moving forward, even for companies using one cloud provider, adopting and implementing new resources from other providers may add value, and custom solutions, rather than one-size-fits-all offerings from individual providers, will increasingly become the norm.

Remote work tools will continue to improve

While remote work emerged during the pandemic as an emergency measure, the tools developed to accommodate it are now available as part of the expanded landscape of hybrid work technology. As AR and VR technology becomes more viable, organizations will continue to introduce and adopt new means of building a work environment that suits the needs of a diverse and changing workforce.

Cloud adoption will increase in formerly resistant sectors

Until recently, organizations in government and financial services resisted transformation due to the risk and burden of retiring entrenched legacy systems and migrating massive amounts of data. Lately, though, the advantages of cloud adoption have been harder to ignore, and more organizations in these industries are adapting accordingly. For example, the U.S. Army recently said it would start using Google Workspace for its personnel operations. This expansion into previously less served areas of the cloud market speaks volumes for cloud adoption.

Categories: AI and Machine Learning, Application Development, Hybrid and Multicloud

Thinking Differently About Automation at 2Gather: NYC

On November 10, 2022, C2C returned to Google's offices in Chelsea, Manhattan for a 2Gather event all about intelligent automation. The robust event program included a fireside chat with representatives of Granite and Becton, Dickinson and Company (BD) moderated by C2C partner Automation Anywhere, a presentation from partner Palo Alto Networks, a conversation between partner Workspot and their customer MSC, and a panel featuring the speakers from MSC, Workspot, BD, and Granite. Google's Drew Hodun introduced and moderated the event program, but the majority of the content was driven by the participating customers and partners and the guests in attendance with questions and ideas to share with the speakers and with one another.

After a hello and a word on C2C from director of partnerships Marcy Young (@Marcy.Young) and an opening address from Drew, Ben Wiley of Automation Anywhere introduced Paul Kostas of Granite and Nabin Patro of BD and offered some background about Automation Anywhere's mission to build digital workforces for organizations that need them, with a particular focus on business processes like data entry, copy and paste, and parsing emails. Ben also mentioned Automation Anywhere and Google Cloud's joint solutions for office departments like contact centers. Paul made a point of shouting out solutions like AA's Automation 360 and Google Cloud's Doc AI, which Granite used to build 80 automations in 9 months, and Nabin touched on how automation helped manage some of the work that went into BD's manufactured rapid diagnostic test kit for COVID-19.

"The technology is forcing us to think differently."

Next, Akhil Cherukupally and David Onwukwe of Palo Alto Networks took the stage to walk through some of the technical components of the security platforms the company offers organizations navigating the cloud adoption process. Then Workspot's Olga Lykova (@OlgaLykovaMBA) brought up Google Enterprise Account Executive Herman Matfes and Dung La and Angelo D'Aulisa of MSC for a look back through the history of the companies' work together. Olga started things off with an origin story about the Citrix leaders who left their company to start a cloud-hosted platform with Workspot, which turned out to be a superior business model. Then she turned to the other guests to explore how Workspot helped MSC build automations on the front end of their business processes and ultimately implement these automations end to end.

Speaker Panel at 2Gather: New York City

Finally, Drew, Angelo, Dung, Paul, and Nabin returned to the stage for a panel discussion breaking down all of the issues raised during the previous sessions. A question from Drew about how each organization's work has impacted its customers prompted Paul to go long on the benefits of Granite's services. When Angelo gently added, "We're a Granite customer," the audience laughed along with the panelists. "Thank you for being a customer," Paul said. Drew also asked the group about what's coming next at each company. The answers ranged from the concrete to the philosophical. "The technology is forcing us to think differently," Nabin observed. In response to a question from a guest in the audience, Paul acknowledged the human impact of automation and stressed the importance of getting people to feel good about automating processes rather than fearing for the future of their jobs.

As usual, the conversations did not stop here.
The speakers and guests continued to share ideas and brainstorm solutions into the networking reception and even the informal dinner that followed, where Clair Hur (@write2clair) of Vimeo stopped by to explain how the company is cutting costs significantly after migrating from AWS to Google Cloud. More of these stories will be collected in our upcoming monthly recap post. For now, watch the full recording of the New York event here:  Extra Credit:  

Categories: AI and Machine Learning, Cloud Operations, Session Recording

Introducing Intelligent Automation Everywhere With Shalini Mayor, Salesforce Senior Director of Enterprise Automation

Before Shalini Mayor (@smayor) brought her background in automation to leadership roles in the private sector, she "almost became an astronaut." As a subcontractor to NASA's Langley Research Center, Shalini worked on various coding and algorithm development projects. She may have moved on from NASA before experiencing space travel, but much of her work as a director of Enterprise Automation at Salesforce is not unlike observing Earth from a distance. "With the explosive growth that you've seen at Salesforce, it's very easy to get disconnected," she told the crowd at a C2C 2Gather event in Sunnyvale, California. "Everything runs as a little startup within itself…when I was brought in, my primary role was to bring some structure to this madness."

"Since then we've been scaling out," she continued, "trying to figure out 'where do we have the most repetitive processes?'" Finance, HR, and IT operations are the major sites of repetitive processes at Salesforce, according to Shalini. What does it take to automate processes in so many different areas at a company with over 77,000 employees? In Shalini's opinion, it takes more than just robotic process automation (RPA). "What we're looking at really is a business process end to end," she told Sunnyvale. "RPA is a small part of it. What about the rest of it? How do we reduce manual intervention in any process? How do we actually take that away so that it will just run?"

To answer these questions, Shalini is thinking beyond the scope of the automation currently adopted at most organizations, sometimes back to the math and science she studied in graduate school as the basis for her education in AI and ML. "Anything that you look at all the way back down to the rudiment, it's still exactly the same," she said in an interview after the Sunnyvale event. Even though the extent of what's possible with automation today is "mind-boggling," the automations themselves are still based on the same linear algebra as the first AI and ML models Shalini encountered as a student. For Shalini, thinking about the foundations of automation makes it possible to look beyond RPA bots and straightforward rule-based models, incorporating approaches like decisioning and illuminating new opportunities.

"What we're looking at really is a business process end to end."

At Salesforce, these new possibilities include Natural Language Processing (NLP) and Natural Language Understanding (NLU) technologies like Google Cloud's Document AI and other solutions in high demand at the company's contact centers, which Shalini sees as high-priority contexts for automation use cases. Despite her enthusiasm for automation, however, Shalini is careful not to forget the human factor of workplace processes. She is not interested in reducing or combining job roles, as some workers fear executives may plan to do with automation in place. "If I can take some of these mundane tasks off people's lists," she told Sunnyvale, "that's where the growth comes in."

This human factor is also what Shalini recognizes as the value of a customer community and open spaces for peer-to-peer discussion like C2C's events.
At Sunnyvale, she particularly appreciated "the fact that I could speak with so many people and help them learn something" and "learning that people are facing similar issues." On November 10, 2022, Shalini's colleague at C2C Partner Automation Anywhere, Vice President of Commercial Sales Ben Wiley, will appear alongside a diverse panel of guests to elaborate on some of what Shalini discussed in Sunnyvale, face-to-face, with a fresh group of Google Cloud customers and partners looking to automation to solve their business problems. To join them, use this link to register today. Extra Credit:

Categories: AI and Machine Learning, Google Cloud Partners

Connecting Across Career Journeys at 2Gather: Chicago

When Meiling He, Senior Data Scientist at Rockwell Automation, was asked at the last minute to fill in for her manager, Francisco Maturana (@maturanafp), at 2Gather: Chicago, she had never heard of C2C Global. The next day, she was on a train from Milwaukee preparing to speak at the Google Cloud customer community's first face-to-face event in the Midwestern US. "Yesterday was the first time I heard about this, at around 3:00 p.m.," she said. "It was new, but my manager sent me the information about what questions would be asked, and he did have his preparation for the event, so I got the information I needed."

From left: Lilah Jones, Paul Lewis, Meiling He, and Vrinda Khurjekar

Meiling presented alongside Pythian CTO Paul Lewis, who spoke to C2C in advance of the event about how the company prepares data sets to be used for a variety of AI and ML solutions, and Vrinda Khurjekar, Senior Director of AMER Business at Searce. The panel discussion, moderated by Google Head of ISVs and Marketplace Sales Lilah Jones, explored how businesses can use AI and ML solutions in general to get the most value out of their cloud adoption. Even though she had had so little time to prepare for it, Meiling's experience at the event was a pleasant surprise: "I think it was so fun. I learned a lot from the perspective, the questions, the answers. It's so nice to be around people like Lilah and Paul. They're so knowledgeable and outgoing."

Meiling was also pleasantly surprised to be able to make her own connections following the scheduled program with other customers in attendance. She appreciated having the chance to talk shop with a fellow data practitioner, Revantage Data Engineer Trevor Harris. Many of the other guests in attendance were satisfied with the opportunity to network as well. "It's a great place to connect with other professionals, business and also technical, and it's a really wonderful experience," said Henry Post of US Bank. "Great food, great presentation, and great people." Jeff Parrish (@Jeff P) of Redis agreed. "I thought it was excellent," he said. "It was a good flow, good panel, good interaction, and a good pick of different industries and different people."

"I think it was so fun. I learned a lot from the perspective, the questions, the answers."

Guests mingling at 2Gather: Chicago

The opportunity to connect with other Google Cloud customers was also a major value-add for the Google and C2C partners in attendance. "It was excellent. I learned a lot about Google's partnership with some of its customers, and got to network with some excellent people," said Brendan O'Donnell (@bpod1026), a customer success manager at Aiven, which joined C2C as a partner after sending employees to multiple C2C events this spring and summer. "I met some representatives from Salesforce. Jeff from Salesforce."

Unlike Meiling, Jeff Branham (@Branham24), current Director of Industry Alliances at Salesforce, knew all about C2C. In fact, as many of our members will remember, Jeff served as C2C's first Executive Director before moving on to his new role. He was excited to be able to attend a C2C event in person, having left the company with COVID quarantine measures still in place, and was pleased to see how the team had grown.
He was also pleased to be able to make some connections of his own, particularly with Paul Lewis of Pythian, who gave him some valuable insights as a representative of a Google partner company about collaboration between CTOs and CFOs.

Meiling was also excited to be able to hear from a CTO, as a practitioner who hopes to someday move into an equivalent role. "Since day one of working at Rockwell I wanted to be a data scientist," she said. "I was the Business Intern, then Data Analyst Intern, then IT Associate, then Data Scientist, then finally Senior Data Scientist, so it was a long journey." Now that she has reached this point in her career, Meiling is grateful to be able to connect with leaders who inspire her to take the next step professionally. She looks forward to more opportunities to do so at C2C events.

"I would like to know what other people are doing at their own company," she said. "I hope I will be invited." Extra Credit:

Categories: AI and Machine Learning, C2C Community Spotlight

The Why and the How of AI and ML Insights: An Interview with Pythian CTO Paul Lewis

On August 11, 2022, C2C will host 2Gather: Chicago, the Google Cloud customer community's first in-person event in the Chicago area. Moderated by Lilah Jones, Head of Corp Sales, Central US, Google Cloud, the event program will feature speakers Francisco Maturana, a data architect at Rockwell Automation, Vrinda Khurjekar, Senior Director of AMER Business at Searce, and Pythian CTO Paul Lewis. The panel will discuss the technical and business advantages of using AI and ML on Google Cloud. In advance of the event, we reached out to Paul Lewis, an engaged and active member of our community who joins us from our foundational platinum partner Pythian, to discuss AI and ML insights, connecting business and technical collaborators, and the value of a peer-to-peer Google Cloud community.

Pythian has received significant industry recognition for its data solutions. To what extent today does a data solution necessarily require an AI or ML component?

It is fair to say that most data solutions have a "why," and that why is because I'm trying to create some sort of insight. Insight might be for the purpose of creating a new customer experience, or creating some insight for efficiency, or monetizing the value of a current set of offerings, and that insight requires a combination of three things: I need to find where the data is in my core systems from my third party, I need to create analytical value in a data platform, and I need to use AI and ML algorithms to source out that piece of insight which I'll use to make a decision. So it has all three of those components. I'd argue that if you're starting with the end, starting with the insight, all of that technology and process is required to deliver on it.

You spoke with C2C earlier this year about cloud security and the shared roles of businesses and cloud providers. When working with systems and processes that are largely automated, what cloud security considerations arise?

Cloud security requires the assumption that you are going to bring your algorithms to the data versus the data to the algorithms––a really big shift from exporting data out of a production system into your laptop, producing your algorithms in your API of choice, and then sending that algorithm back up to be both trained and tested. Now it's about training and testing in the cloud, which has access directly to those data sets internally and externally. So that's the big shift: moving where you're actually both developing your model, training your model, and creating inference or executing on that model. It is the best bet to do that in the cloud.

A big problem in healthcare, as you can imagine, is sharing information across organizations. Since data sharing is required to make complex diagnostic decisions, I need to be able to package up that information from a diagnostics perspective, share it amongst a group of people, and then that prediction can come together. Multiple practitioners can participate in the model development, multiple practitioners can provide input into the model and the training, and then infer it for the purpose of new patients coming in.

On August 11, at 2Gather: Chicago, you'll be speaking alongside Francisco Maturana, a data architect at Rockwell Automation, and Vrinda Khurjekar, Senior Director of AMER Business at Searce. As a CTO, how does speaking alongside both technical and business professionals influence the kind of discussion you're able to have?

My conversations tend to be balancing the difference between why and how.
On the business side, what are ultimately the business goals we're trying to achieve? It tends to boil down to something like data monetization. Now, monetization could simply mean selling your data, it could mean creating a better insight on your customers, maybe as customer segmentation, maybe it's wrapping a non-data related product with a data-related product, like a checking account alongside an ability to predict spending behavior changes over time. Or it might be internal, making better M&A decisions or creating some sort of efficiency in a process, or just making general business decisions better or cleaner in a sense.

So, you can take that why and say, 'well, that why can be delivered on a variety of hows.' A how can be as simple as a query and as complex as the entire data engineering chain. And that's the bridge between the why and the how. Not only does the data engineer or data architect get a better appreciation for the type of business decisions I need to be able to make based on this work, but the business person gets to understand the potential difficulties of making that actually true.

Do you think that most customers come to a peer-to-peer panel discussion with a why or a how in mind?

Yes. Very rarely is it unanswered questions. Very rarely is it, 'I know I have some nuggets of gold here, could you possibly look into my pot and see if there's anything interesting?' That might have been true five years ago, but people are much more well-read, definitely on the business and the technology side. There has to be a why, and if there has to be a why, there's one too many potential hows. What's our best bet to the how? Data engineers, data modelers, and data scientists are the go-to people to hire. In fact, it's so complex that I now need partnerships of talent, so I might not know that I need a junior, senior, or intermediate scientist, because I don't have that background. I don't have that expertise, so I've got to lean on partnerships in order to figure that out.

Is being able to find the right why for the right how what makes a community of Google Cloud customers uniquely valuable?

Exactly. It's also sharing in our expertise. There's this huge assumption that I just have to acquire the expertise to deliver on my particular why or how, that I just need to learn Python in twenty-one days, that I just need to get another data modeler to understand what a bill is, what a person is, what a patient is, what a checking account is, but the reality is you have to balance expertise with experience. You could hire a bunch of people or train up your existing staff, but if they've never done it before, that's where you need partnerships. That's why you need a community. That's why you need to be able to talk to your peers. That's why you need to have these kinds of conversations, to balance what I think I can do with what's actually possible, or what's been done before.

Are there any particular conversations you're hoping to have at the event in Chicago?

Yeah, absolutely. The conversations I'm looking to have are unique or interesting whys that I think could be compelling across a variety of industries. What I find most interesting isn't that two retail chains have the same customer segmentation problem, it's that you can take a customer segmentation in retail and apply that to manufacturing of cookies. So, something we can reuse across these industries, because in my opinion these industry solutions are going to be on the forefront of the whys.
I'm going to be able to download cookie client segmentation and then augment it for my needs. I don't have to invent it going forward.

Do you have any final thoughts to share with the Google Cloud customer community?

I'm really looking forward to this particular event. It's rare that we get to have real peer-to-peer conversations, so I'm absolutely looking forward to it, and Google's a nice space to do it in, so, that's always a bonus.

Are you based in Chicago? Do you need to find a how for your why, or vice versa? Join Paul, the C2C Team, and the rest of our distinguished speakers at 2Gather: Chicago on August 11! Register here:

Categories: AI and Machine Learning, C2C Community Spotlight, Google Cloud Partners, Interview

Vertex AI Drives Conversation at C2C Connect: France Session on International Women's Day

On Tuesday, March 8, also known as International Women's Day, C2C France Team Leads @antoine.castex and @guillaume blaquiere were excited to welcome Google Lead Developer Advocate @Priyanka Vergadia to host a powerful session for the Google Cloud space in France and beyond. These sessions intend to bring together a community of cloud experts and customers to connect, learn, and shape the future of cloud. At this C2C Connect event, Vergadia led a broad and enthusiastic discussion about Vertex AI and the MLOps pipeline.

60 Minutes Summed Up in 60 Seconds

ML and AI are the cornerstone technologies of any company that wants to leverage its data value. ML can be used across different platforms, including Google Cloud. BigQuery ML is a key example of serverless ML training and serving. Vertex AI is the primary end-to-end AI product on Google Cloud and interacts with many other Google Cloud products. Low-code and no-code users can reuse pre-trained Vertex AI models and customize them to fit their business use cases; it's perfect for beginner and no-ML engineer profiles. Advanced users can leverage Vertex AI's managed Jupyter Notebooks to discover, analyze, and build their models. Vertex AI also allows users to train models at scale, to deploy serverless models, and to monitor drift and performance.

As Vergadia reminded the audience, ML engineering makes up only 5% of the effort that goes into the ML workflow. The upstream steps (data cleaning, discovery, feature engineering preparation) and the downstream steps (monitoring, retraining, deployment, hyperparameter tuning) must be optimized to save time, effort, and money. To this end, Vertex AI supports a pipeline definition, based on the TFX or Kubeflow pipelines, to automate the end-to-end tasks around ML engineering; this automated approach is known as MLOps. Watch the full recording of the session below:

Despite its 60-minute time limit, this conversation didn't stop. Vertex AI is a hot topic, and it certainly kept everyone's attention. The group spent time discussing data warehouses, data analytics, and data lakes, focusing on products like BigQuery, Data Studio, and Cloud Storage. Attendees also offered their own feedback on the content of the session. For example, halfway through the presentation, Soumo Chakraborty asked how users can integrate ML pipelines in a CI/CD pipeline, and pipeline integration became a focal point of the remainder of the discussion.

Preview What's Next

These upcoming C2C events will cover other major topics of interest that didn't make it to the discussion floor this time around: Make the Cloud Smarter, April 12, 2022; Looker In the Real World with Looker PM Leigha Jarett, May 10, 2022 (in-person event in Paris). If these are topics you're eager to explore at future events, be sure to sign up to our platform!

Extra Credit

Looking for more Google Cloud products news and resources? We got you. The following links were shared with attendees and are now available to you: Vertex AI, BigQuery ML, C2C Events
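The recap above calls out BigQuery ML as an example of serverless ML training and serving. As a minimal illustrative sketch (not something shared at the session), the following Python snippet uses the google-cloud-bigquery client to train a logistic regression model with a single SQL statement; the project, dataset, table, and column names are placeholders.

```python
from google.cloud import bigquery

# Placeholder project ID; application default credentials are assumed.
client = bigquery.Client(project="your-project-id")

# BigQuery ML trains the model in place: no servers or clusters to manage.
create_model_sql = """
CREATE OR REPLACE MODEL `your_dataset.churn_model`
OPTIONS (model_type = 'logistic_reg', input_label_cols = ['churned']) AS
SELECT usage_minutes, support_tickets, churned
FROM `your_dataset.customer_history`
"""
client.query(create_model_sql).result()  # blocks until training finishes

# Once trained, ML.PREDICT serves predictions with another query.
predictions = client.query("""
SELECT *
FROM ML.PREDICT(MODEL `your_dataset.churn_model`,
                (SELECT usage_minutes, support_tickets
                 FROM `your_dataset.new_customers`))
""").result()

for row in predictions:
    print(dict(row))
```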

Categories: AI and Machine Learning, Session Recording

How the Bank of England Uses AI for FinTech Innovation

The Bank of England (BoE), the world's oldest central bank, is one of the most visible and high-profile investors in innovation. Over the last decade, it has developed its own innovation lab, with projects including the Bank of England Accelerator, Her Majesty's Regulatory Innovation Plan, and the Regulatory Sandbox. It introduced a RegTech cognitive search engine and uses artificial intelligence (AI) technologies for chatbots and predictive real-time insights. More recently, the Bank made headlines with its plans for a "digital pound" on the blockchain, called Britcoin, which will use AI in its executable smart contracts.

Cognitive search engine

The BoE employs a Switzerland-produced cognitive search engine as its company search solution. The tool uses AI and ML to gather data from multiple sources and deliver real-time, relevant responses to users' questions. The Bank also embeds it in its CRM to improve client conversations and reduce meeting preparation times. Users find answers to their questions up to 90% faster than they would with a manual search. This tool not only boosts productivity and improves client trust but also makes it easier and simpler for the Bank to comply with ever-changing regulations.

Chatbots

The BoE uses chatbots for various services, including:

Functional chatbots that help customers with routine questions, such as directing callers to the closest ATMs to their locations.
More sophisticated AI conversational assistants that feed customers investment recommendations and real-time market-related news, among other industry-related data.
Chatbots using a combination of predictive analytics and prescriptive analytics to give decision-makers at the BoE real-time insights. Examples include helping BoE executives gauge their biggest competitors in the micro-lending space and helping them determine which customer segment they should target for their advertising for a new mobile app.

Britcoin

Britcoin is the Bank of England's plan for a digital currency accepted by retailers and other companies in lieu of debit and credit cards. Owners would initially have limits on how much Britcoin they could hold, but conversion to sterling and its transactions would take minutes. Unlike most cryptocurrencies, Britcoin will be a stablecoin, meaning it will tether itself to UK currency to avoid the problems of crypto fluctuations. Supporters appreciate that Britcoin would use AI-enabled smart contracts to execute DeFi transactions that are cheaper, faster, and more transparent than online payments and money transfers. Critics fear the innovation could lead to financial instability, along with higher loan and mortgage rates, among other problems. To resolve these issues, a task force has been assembled to report on the merits of the CBDC (Central Bank Digital Currency) by the end of this year.

Why the Bank is interested in AI

In her 2021 keynote address at the FinTech and InsurTech Live event on how the Bank of England uses AI, Tangy Morgan, an independent BoE advisor, described how the Bank conducted a survey assessing how banks headquartered or operating in Britain have used machine learning and data science during COVID-19, and how the BoE can profit from that report. The BoE found that the use of AI was growing at an exponential pace and could benefit the Bank in various ways. Possible applications of AI in this context include:

Money laundering prevention, with AI identifying patterns of suspicious behavior to support anti-money laundering (AML) efforts.
Underwriting and pricing applications, where big data analytics scrutinizes customers' risk profiles, tailoring premiums to match individual risks.
Credit card fraud detection, whereby AI analyzes large numbers of transactions to detect fraud in real time.

The Bank of England asserts that "developments in fintech … support our mission to promote the good of the people of the UK by maintaining monetary and financial stability." Are you based in the UK? What do these uses of AI bring to mind for you? Write us on our platform and let us know.

Categories: AI and Machine Learning, Industry Solutions, Financial Services

Community Conversations at Startups Technical Roundtable Continue on the C2C Platform

The Startups Roundtable series hosted by C2C and Google Cloud Startups continued on Tuesday, Jan. 25 with another session on AI and ML, this one devoted solely to technical questions. These roundtable discussions are designed for startup founders seeking technical and business support as they realize their visions for their products on the Google Cloud Platform. This time, 10 Googlers, including 6 Customer Engineers, led private discussions in small groups with over forty guests from the C2C community. Watch the introduction to the event below:

As in the previous Startups Roundtable, after the introduction, the hosts assigned the attendees to breakout rooms where they could ask their questions freely with the attention of the Google staff on the call. The breakout rooms in these sessions are not recorded, but C2C Community Manager Alfons Muñoz (@Alfons) joined one of the conversations to gather insights for the community. In this breakout room, Google Customer Engineer Druva Reddy (@Druva Reddy) explained how to understand the value proposition the startup is giving and how users will interact with the business. Reddy advised guests to focus on having a vision of the market and to build a product with a high level of abstraction, rather than focusing simply on the data-specific tools they are going to use.

According to Muñoz, after the time allotted for the discussions in the breakout rooms ended, the conversations kept going. Guests had more questions to ask and more answers to hear from the Google team. The hosts invited all attendees to bring their questions to the C2C platform for the Googlers to answer after the event. Two guests took them up on the offer, and Reddy wrote them both back with detailed advice.

Markus Koy (@MarkusK) of thefluent.me wrote:

Hi everyone, I am using the word-level confidence feature of the Speech-to-Text API in my app (POC) https://thefluent.me that helps users improve their pronunciation skills. Is there an ETA when this feature will be rolled out for production applications and if so, for which languages? @osmondng, @Druva Reddy thank you for offering to reach out to the Speech API team. Markus

and Reddy wrote back:

Hi Markusk, It was great chatting with you!! The Product team is aiming for Word Level Confidence General Availability (GA) by end of Q2 2022. Regarding languages supported, currently it supports English, French and Portuguese, and that being said, multiple languages will be supported as we roll out the support for other languages in phases. Please stay tuned and check out announcements here: https://cloud.google.com/speech-to-text/docs/languages. Thanks, Druva Reddy

The next day, Erin Karam (@ekaram) of Mezo wrote:

Hello, We are looking for guidance with training our Dialogflow CX intent. Our model is limited by the 2000 limit on training phrases for a single intent. Our use case is that we are attempting to recognize symptoms from the user. We have 26 different symptoms we are trying to recognize. We have 10s of thousands of rows of training data to train for these 26 symptoms. The upper limit of 2000 is hampering our end performance. Please advise. Erin

and Reddy responded:

Hi Ekaram, Thanks for joining today's session!! The default limit is 2000 training phrases per intent. This amount should be enough to describe all possible language variations. Having more phrases may make the agent performance slower. You can try to filter out identical phrases or phrases with identical structure. You don't have to define every possible example, because Dialogflow's built-in machine learning expands on your list with other, similar phrases. However, create at least 10 to 20 training phrases so your agent can recognize a variety of end user expressions. Some of the best practices I would suggest are: avoid using similar training phrases in different intents, avoid special characters, and do not ignore agent validation. Let me know if that works.

A startup is a journey, and no startup founder will be able to get all the answers they need in one session. That's why the Startups Roundtable series is ongoing; more business and technical roundtables will be coming soon. For now, if you are a startup founder looking for more opportunities to learn from the Google Startups Team and connect with other startup founders in the C2C community, register for these events for our startups group:
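For readers curious about the word-level confidence feature Markus asked about, here is a minimal, hypothetical sketch of how it can be requested with the google-cloud-speech Python client; the Cloud Storage path and audio settings are placeholders, and availability for a given language should be checked against the documentation linked in Reddy's reply.

```python
from google.cloud import speech

client = speech.SpeechClient()

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
    enable_word_confidence=True,  # request a confidence score per recognized word
)

# Placeholder bucket and file name.
audio = speech.RecognitionAudio(uri="gs://your-bucket/practice-reading.wav")

response = client.recognize(config=config, audio=audio)

for result in response.results:
    best = result.alternatives[0]
    for word in best.words:
        print(f"{word.word}: {word.confidence:.2f}")
```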

Categories: AI and Machine Learning, C2C News, Google Cloud Startups, Session Recording

C2C Deep Dive Series: Applying Computer Vision with Pre-Trained Models

Use cases for artificial intelligence (AI) are so many and varied that the meaning of the term itself can be hard to pinpoint. The Google Cloud Platform supports a host of products that make specific AI functions easy to apply to the problems they're designed to solve. Vision AI is a cloud-based application designed to make computer vision applicable in a wide variety of cases. But what is computer vision, exactly?

On December 8, 2021, C2C invited Eric Clark of foundational C2C partner and 2020 Google Cloud Partner of the Year SpringML to answer this question. Clark's presentation, a C2C Deep Dive, offered an enriching explication of the concept of computer vision, as well as projections for its impact on the future of AI. Most notably, Clark used Vision AI to present multiple demonstrations of computer vision in action.

To set the stage for these real-world applications, Clark offered a breakdown of the essential functions of computer vision. Next, Clark used real footage of traffic at a busy intersection to demonstrate how computer vision monitors this footage for incidents and accidents to calculate travel times. To showcase Vision AI's video intelligence capabilities, Clark uploaded a video and applied different tags to demonstrate how computer vision recognizes and identifies individual elements of different images. Clark's final demonstration was an in-depth look at several infrastructure maintenance use cases, starting with a look at how computer vision can be used to detect potholes and other impediments to safe road conditions.

Clark's demonstrations made clear that Vision AI is as user-friendly as it is powerful, and Clark made sure at the end of his presentation to invite attendees to make a trial account on the Google Cloud Platform and try out the API themselves. Alfons Muñoz (@Alfons), C2C's North American Community Manager, echoed his encouragement. "It's really easy to try it out," he said.

If you haven't already, set up an account on the Google Cloud Platform and try using Vision AI for help with a current project, or even just for fun. Write us back in the community to let us know how it goes!
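As a starting point for trying the API yourself, here is a minimal sketch (my own example, not from Clark's demos) that runs label detection on a local image with the google-cloud-vision Python client; the filename is a placeholder and a configured Google Cloud project is assumed.

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# Placeholder image file; this could be a road photo, a traffic frame, etc.
with open("road.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)

# Each detected label comes back with a confidence score between 0 and 1.
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```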

Categories: AI and Machine Learning, API Management, Industry Solutions, Google Cloud Product Updates, Government and Public Sector, Session Recording

C2C Community Members Get in the ML Mindset

Machine Learning (ML) is a major solution business and technical leaders can use to drive innovation and meet operational challenges. For managers pursuing specific organizational goals, ML is not just a tool: it's a mindset. C2C's community members and partners are dynamic thinkers; choosing the right products for their major projects requires balancing concrete goals with the flexibility to ask questions and adapt. With these considerations in mind, C2C recently invited Google Cloud Customer Engineer KC Ayyagari to host a C2C Deep Dive on The ML Mindset for Managers.

Ayyagari started the session by asking attendees to switch on their cameras and then ran a sentiment analysis of their faces in the Vision API. After giving some background on basic linguistic principles of ML, Ayyagari demonstrated an AI trained to play Atari Breakout via neural networks and deep reinforcement learning. To demonstrate how mapping applications can use ML to rank locations according to customer priority, Ayyagari asked the attendees for considerations they might take into account when deciding between multiple nearby coffee shops to visit.

As a lead-in to his talking points about the ML mindset for managers, Ayyagari asked attendees for reasons they would choose to invest in a hypothetical startup he founded versus one founded by Google's Madison Jenkins. He used the responses as a segue into framing the ML mindset in the terms of the scientific method. Startup management should start with a research goal, he explained, and ML products and functions should be means to testing that hypothesis and generating insights to confirm it.

Before outlining a case study of using ML to predict weather patterns, Ayyagari asked attendees what kinds of data would be necessary to use ML to chart flight paths based on safe weather. Guest Jan Strzeiecki offered an anecdote about the flight planning modus operandi of different airports. Ayyagari provided a unique answer: analyzing cloud types based on those associated with dangerous weather events.

The theme of Ayyagari's presentation was thinking actively about ML: in every segment, he brought attendees out of their comfort zones to get them to brainstorm, just as an ML engineer will prompt machines to synthesize new data and learn new lessons. ML is a mindset for this simple reason: machines learn just like we do, so in order to use them to meet our goals, we have to think and learn along with them.

Are you a manager at an organization building or training new ML models? Do any of the best practices Ayyagari brought up resonate with you? Drop us a line and let us know! Extra Credit:
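The kind of "sentiment analysis of faces" described above is exposed in the Vision API as face detection with emotion likelihoods. The sketch below is a minimal, hypothetical example of requesting those likelihoods with the google-cloud-vision Python client; it is not Ayyagari's demo code, and the image filename is a placeholder.

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# Placeholder image; in the session, attendees' webcam frames played this role.
with open("attendee.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.face_detection(image=image)

# Each detected face carries a likelihood (VERY_UNLIKELY ... VERY_LIKELY) per emotion.
for face in response.face_annotations:
    print(
        "joy:", face.joy_likelihood.name,
        "| sorrow:", face.sorrow_likelihood.name,
        "| surprise:", face.surprise_likelihood.name,
    )
```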

Categories: AI and Machine Learning, Data Analytics, API Management, Session Recording

Startup Founders Get Their Questions Answered In C2C's Google Cloud Startups Roundtable

On Tuesday, November 16, 2021, C2C hosted its first Google Cloud Startups roundtable event. This series, organized and planned specifically for representatives from startups looking to grow their businesses, brings these representatives together with Google Cloud Customer Engineers, Technical Specialists, and Startup Success Managers to lead discussions and answer questions on hot topics in the startup space. The first roundtable included group sessions for business leaders and technical staff as well as a Customer Engineer AMA, all exploring artificial intelligence (AI) and machine learning (ML), and the potential uses of each for startup businesses as they form and begin to scale.

After welcoming guests and introducing the Google staffers on the call, the event's organizers invited attendees to join breakout rooms based on whether they had come with technical or business questions to discuss. These breakout rooms were not recorded, but C2C North America Community Manager Alfons Muñoz joined the technical discussion. In this breakout room, startup founders from 86 Repair and Auralab brought their questions directly to Google's customer engineers. According to Muñoz, "They were stating their problems or projects and getting an overview of how to approach these problems...and they had more than one overview, because we had more than one customer engineer, so they had more than one point of view. They also were encouraged to get in the community."

Most of this event's ninety minutes were spent in the breakout rooms, but after about an hour, the groups came together again for an AMA with all of the customer engineers on the call. In this session, the visiting startup founders revisited the topic that had dominated the conversations in the breakout rooms: data. In order to use ML effectively, an organization needs a platform that can store, host, and manage data reliably. Google's Deok Filho offered a canny on-the-spot breakdown of the relative advantages and disadvantages of integrating different Google and third-party data management tools with BigQuery, bringing in Mike Walker to field follow-up questions from Ben Collins of Auralab and Daniel Zivkovic, founder and curator of Serverless Toronto, along the way. Check out a clip of the conversation below:

According to Muñoz, in terms of connecting guests to the right Google staffers and getting their questions answered, this event was a success, but, in his words, "it's important to note that this is the first of many roundtables." Look for more of these events for startup founders in 2022, including the next AI and ML roundtable in January:

Categories: AI and Machine Learning, Google Cloud News, C2C News, Google Cloud Startups

A Simple Explanation of the Bag-of-Words Model and How To Implement It

Teaching a machine model to think is one of the most challenging—and rewarding—tasks technology can accomplish. When you want your model to recognize images, you simply convert them into numbers, or vectorize them, in a process called "feature extraction" or "feature encoding." For example, you may want to encode the image of a cat into the following vectors: the curved shape of the ear [186], the color of the iris, red [99], and the paws, grey [37]. But how do you train the model to recognize text? After all, text data is abstract; it's composed of words with various conceptual referents. That's where the bag-of-words (BoW) model comes in. Using this model, you place your words into one or more "bags," or multiple sets, and vectorize them on a spreadsheet. This helps you classify documents, calculate probability, detect spam, and more. Read on to learn how the BoW model solves a series of common but critical problems.

Natural Language Processing: Understanding Text

What if I am working on an application with document-scanning capabilities and I want it to do more than just recognize text? I want to teach my ML model to understand one or more sentences. I can teach my algorithm how to convert images into binary form, but how am I going to train it on abstract text?

Solution

I convert the text data into binary metrics on a spreadsheet, just as I would with vectorized images.

Example

Sentence: "I like to go to the movies."

The keywords tell my ML-trained model how to understand the gist of a sentence. In this case, the sentence theme is Like; Movies. I flip those keywords into binary metrics, thus: Like [1]; movies [1]. The other words (I, to, and the) are subordinate to the keywords, so I map them on my spreadsheet as 0s: [010001]. Now that I've trained my model to identify the theme in the sentence, it can proceed to do the heavy lifting, which is what it's best at. In other words, my model, now trained through BoW, can predict, analyze, categorize, and so forth.

Document Classification

I want my application to facilitate better sorting and organization of scanned documents. I need to train my model to tell me how many times certain keywords appear in certain sentences. How can the BoW model help?

Example

Sentence 1: "I like to go to the movies."
Sentence 2: "I do not like movies like this."

Each of these sentences is itself a BoW, since each is made up of a unique set of words. To determine how many times each word in the first sentence appears, I first tabulate the frequency of the words in each BoW:

Word      BoW (1)   BoW (2)
I         1         1
Like      1         2
To        2         0
Go        1         0
The       1         0
Movies    1         1

Then, I can count the total number of words by adding both columns. For instance, the word "movies" appears twice in our combined bag of words.

Information Retrieval

I want to be able to search scanned documents for particular text data. To do so, I need to know whether certain words appear in more than one sentence. Here's where I use the "both" feature.

Example

BoW 1: "I like to go to the movies."
BoW 2: "I do not like movies like this."

I can still use the same table, but this time I'll add a column to keep track of which words appear in both sentences:

Word      BoW (1)   BoW (2)   Both
I         1         1         1
Like      1         2         1
To        2         0         0
Go        1         0         0
The       1         0         0
Movies    1         1         1

Unlike the words I, like, and movies—which appear in both sentences—the words to, go, and the only appear in BoW (1). Thus, I tag the first set of words (1) and vectorize the second set of words as (0).
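To make the counting above concrete, here is a short Python sketch (my own illustration, not part of the original article) that builds the same per-sentence word counts and a count vector over a shared vocabulary for the two example sentences:

```python
from collections import Counter

sentences = [
    "I like to go to the movies",
    "I do not like movies like this",
]

# One "bag" (word -> count) per sentence.
bags = [Counter(s.lower().split()) for s in sentences]

# Shared vocabulary across both bags, sorted for a stable column order.
vocab = sorted({word for bag in bags for word in bag})

# One count vector per sentence over the shared vocabulary (missing words count as 0).
vectors = [[bag[word] for word in vocab] for bag in bags]

print(vocab)
for vector in vectors:
    print(vector)
```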
Scoring the Importance of Certain Terms

When I refer back to my scanned documents, I want to be able to keep track of which information is most critical. Therefore, I want my model to score the frequency of certain key terms in the document as a whole.

Example

BoW 1: "I like to go to the movies."
BoW 2: "I do not like movies like this."

The BoW model is equipped with a specific feature that enables this kind of scoring: the term frequency-inverse document frequency (TF-IDF) feature:

Word      BoW (1)   BoW (2)   TFIDF (1)   TFIDF (2)
I         1         1
Like      1         2
To        2         0         2/7         0
Go        1         0
The       1         0
Movies    1         1

By comparing the frequency with which each word appears in each sentence to the number of words in the same sentence, the TF-IDF feature scores each word by frequency per sentence. The word to appears 2 times out of the 7 total words in the first sentence. It appears 0 times in the second sentence.

Probability

Finally, I want to make sure my scanned documents are coming from a trustworthy source. The bag-of-words method is frequently used for spam detection.

Example

Take these two phrases, which could easily pass as email subject lines:

"Send money to me through PayPal"
"Get rich today"

One is legitimate while the other is spam. How can I train my model to know which to delete? First, I use Bayes' theorem of probability to compare the likelihood that a message is legitimate with the likelihood that it is spam, given the words it contains: P(L | words) vs. P(S | words), where L = legitimate and S = spam. This lets me estimate how likely a given word is to appear in a spam email. Then, I categorize either word string into keywords, assigning each string a matched probability. For example: PayPal = legitimate, given a spam probability of 0%. The words Money, Get, Rich, and Today are each weighted 10%. Finally, I combine the spam word weights to get my results:

Legit   "Send money to me through PayPal"   10%
Spam    "Get rich today"                    30%

As a result, I train my ML model to conclude that sentences like BoW (2) are highly likely to be spam.

Other Uses

The examples above describe a series of use cases for the BoW model, but there are others, too. Here are a few more potential uses for the BoW model:

Sentiment analysis, also known as opinion mining, in which online text (such as social content) is mined to evaluate the writer's attitude.
Language modeling, to determine the probability of a given sequence of words occurring in a particular string of words.
Computer vision, in which particular images are given the BoW treatment. In this case, the method is called the bag-of-visual-words model.

Flaws

In some contexts, using the bag-of-words model can introduce unintended problems. Watch out for these potential issues when using this model:

Certain documents or input data may be too sophisticated, complex, or overly large for the limited BoW model.
Too few significant words and too many words with no objective or practical meaning may result in too many null values, rendering the vectorization useless.
If you forget a hyphen between words, one word, such as "home-run," could be split into "home" and "run," and then scored higher than it deserves, skewing results.
Misspellings—such as "tank" instead of "thank," or "gr8" for "great"—distort algorithmic results.
BoW ignores linguistic nuances and context, so certain words or word strings could be scored higher than they deserve.
This could be remedied with transformer-based deep learning models like Bidirectional Encoder Representations from Transformers (BERT), which use neural networks to better discern the context of words in search queries. Many of these issues could also be remedied with the Google Natural Language API, which applies natural-language understanding (NLU), or natural-language interpretation (NLI), to help computers understand and respond to humans in our own language.

Have you ever worked with the BoW model? Would the BoW model be useful for any projects in your ML workflow? Reach out and let us know what you're thinking. Extra Credit:
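As a hands-on follow-up, the sketch below shows how the TF-IDF scoring and Bayes'-theorem spam filtering described above fit together in practice. It is my own toy illustration using scikit-learn (the article does not name a library), and the handful of training phrases and labels are made up for demonstration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data; a real spam filter needs far more labeled examples.
texts = [
    "Send money to me through PayPal",   # legitimate
    "Your invoice is attached",          # legitimate
    "Get rich today",                    # spam
    "Win money fast, get rich now",      # spam
]
labels = ["legit", "legit", "spam", "spam"]

# TfidfVectorizer scores each word by its frequency in a message vs. overall;
# MultinomialNB then applies Bayes' theorem to those scores to classify the message.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["get rich with this one weird trick"]))  # expected: ['spam']
```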

Categories: AI and Machine Learning

Scaling an Enterprise Software: Digital Assistants and Natural Language (full video)

Michael Pytel (@mpytel), co-founder and CTO at Fulfilld, shares stories from the team's wins and losses in building out this intelligent managed warehouse solution. The recording from this Deep Dive includes:

(2:00) Introduction to Fulfilld
(10:15) Natural Language Processing use case for warehouse guidance
(11:40) Generating directions using Dijkstra's algorithm (commonly used in mapping applications) to connect the shortest route between two points
(13:10) Generating audio guidance for a custom map using Google Cloud Run and the Text-to-Speech API
(14:15) Using WaveNet to create natural-sounding, multi-language voices for text-to-speech scenarios
(16:45) Building a digital assistant with Google Dialogflow: intent matching and other features, plus other use case examples of Google Dialogflow
(21:30) Integrating voice while building applications on Flutter
(22:35) Natural language alerts for warehouse operations
(23:50) Big ideas: looking to the future of Fulfilld

Other Resources

WaveNet: A generative model for raw audio
Google Cloud hands-on labs
Google documentation: Creating voice audio files
Build voice bots for mobile with Dialogflow and Flutter | Workshop
The Definitive Guide to Conversational AI with Dialogflow and Google Cloud

Find the rest of the series from Fulfilld below:
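The (11:40) segment mentions Dijkstra's algorithm for finding the shortest route between two points. As a rough illustration of the idea (not Fulfilld's actual implementation), here is a minimal Python sketch that finds the cheapest path through a small, made-up graph of warehouse zones:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over a dict-of-dicts weighted graph."""
    queue = [(0, start, [start])]  # (cost so far, current node, path taken)
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph.get(node, {}).items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical warehouse zones with travel costs between them.
warehouse = {
    "dock":    {"aisle_1": 4, "aisle_2": 7},
    "aisle_1": {"aisle_2": 2, "bin_42": 5},
    "aisle_2": {"bin_42": 1},
    "bin_42":  {},
}

print(shortest_path(warehouse, "dock", "bin_42"))
# (7, ['dock', 'aisle_1', 'aisle_2', 'bin_42'])
```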

Categories: AI and Machine Learning, API Management, Industry Solutions, Google Cloud Partners, Supply Chain and Logistics, Session Recording

Using AI to Improve Spoken Language Fluency With Markus Koy, Founder of thefluent.me

From chatbots to predictive text, all kinds of applications are using AI to navigate language barriers and facilitate communication across different communities. Many of these applications focus on text, but there is more to language than written words. Sometimes even fluent speakers of a second language will experience challenges when communicating face-to-face with native speakers. One of the best ways to overcome these challenges is to practice pronunciation.

Markus Koy (@MarkusK) is an IT projects analyst with 18 years of experience across various industries. He is also a native German speaker living in an English-speaking part of Canada, and a regular visitor to C2C's AI and ML coffee chats, which are hosted in the U.S. Koy's experiences working in English-speaking countries as a non-native English speaker inspired him to create thefluent.me, an AI-powered app that tests speech samples and scores them based on how well they correspond to standard English pronunciation.

On thefluent.me, users record themselves reading samples of English text (usually about 400 characters long), and then post them either publicly or privately on the app's website. Within about 30 seconds, the app delivers results, reproducing the text and indicating which words were pronounced well and which can be pronounced better. Even native English speakers may find that they can improve their pronunciation, sometimes even more so than someone who speaks English as a second language.

We recently approached Koy with some questions about thefluent.me, Google Cloud products, and his experience with the C2C community. Here's what we learned:

What inspired you to develop thefluent.me?

Koy began working on thefluent.me after contributing to a research project with an international language school. As a second-language English speaker himself, he had already taken the International English Language Testing System; he had found pronunciation to be the hardest part of the process. "Immediate feedback after reading a text is usually only available from a teacher and in a classroom setting," he says. Teachers only listen to a speaker's pronunciation once, and will likely not provide feedback on every word. Tracking progress systematically is just not feasible in a classroom setting, and sometimes non-native speakers will feel intimidated when speaking English in front of other students. Koy continued his research on AI speech-recognition programs and also graduated from Google's TensorFlow in Practice and IBM's Applied AI specialization programs. He decided to build thefluent.me to help students struggling to overcome these challenges.

What makes thefluent.me unique?

There are many apps on the market for students studying English as a second language, and thefluent.me is not the only app of this kind that uses AI for scoring. However, apps combine different features to support distinct learning needs. Koy kept these concerns in mind when designing and building the following features for thefluent.me:

Immediate pronunciation feedback: The application delivers AI-powered scoring for the entire recording and word-level scoring on an easy-to-understand scale.
Immediate feedback on reading speed: Besides pronunciation, the application provides feedback on the reading speed for each word.
Own content: Users can add posts they would like to practice instead of using content only published by platforms. They can immediately listen to the AI read their post before practicing.
Progress tracking and rewards: Users can track their activities and progress. They can revisit previous recordings and scores, check their average score, and earn badges.
Group learning experience: By default, user posts are not accessible to others. However, users can also make their posts public and invite others to try, or they can compete for badges.

How do you use the Google Cloud Platform? Do you have a favorite Google Cloud product?

Koy runs thefluent.me on App Engine Flexible. He likes how easy the deployment process is, especially when managing traffic between different versions. Two key Application Programming Interfaces (APIs) Koy is using are Speech-to-Text and Text-to-Speech, which Koy says allow the WaveNet voices to sound more natural. He also likes that both allow him to choose different accents for the AI speech. Koy is also using Cloud SQL and Cloud Storage, which he finds easy to integrate.

What do you plan to do next?

"There are many other items for horizontal and vertical scaling on my roadmap," Koy assures us. He is planning to add additional languages and enhance the app's group features. He has also been approached by multiple companies who want to use thefluent.me for education and training. Koy plans to publish APIs to accommodate these requests in the coming weeks.

Why did you choose to join the C2C community?

Like so many of our members, Koy joined the C2C community to meet people and collaborate, but his experience here has informed his work on thefluent.me beyond friendly conversation. Recently, a community member expressed to Koy that thefluent.me is an ideal tool to use when preparing for a job interview—a user can rehearse answers to interview questions to learn to pronounce them better. For Koy, this is not just nice feedback; it is also a use case he can add to his roadmap. Still, community itself is enough of a reason for Koy to return on a weekly basis. "Mondays are just not the same anymore without our AI and ML coffee chats," he says.
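Koy mentions the Text-to-Speech API and its WaveNet voices. As a rough, hypothetical sketch of how an app might generate a natural-sounding reading of a practice post, here is a minimal call with the google-cloud-texttospeech Python client; the voice name, sample text, and output file are placeholders rather than details from thefluent.me.

```python
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

synthesis_input = texttospeech.SynthesisInput(
    text="Welcome back. Let's practice this paragraph together."
)

# A WaveNet voice; other languages and accents can be selected the same way.
voice = texttospeech.VoiceSelectionParams(
    language_code="en-US",
    name="en-US-Wavenet-D",
)

audio_config = texttospeech.AudioConfig(
    audio_encoding=texttospeech.AudioEncoding.MP3
)

response = client.synthesize_speech(
    input=synthesis_input, voice=voice, audio_config=audio_config
)

with open("practice-post.mp3", "wb") as out:
    out.write(response.audio_content)  # the synthesized reading, ready to play back
```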

Categories: AI and Machine Learning, Application Development, C2C Community Spotlight, API Management, Diversity, Equity, and Inclusion (DEI)

What Is the Difference Between Learning Systems and Rules-Based Systems?

There are two ways to train a machine. The first is to teach it rules so that it can recognize objects. The second is to give it examples so that it can learn to recognize objects on its own. The first modality is called the rules-based approach; the second is called machine learning (ML).

Example: You want a machine to produce certain wheels, identifiable by their company logos, rims, spokes, center caps, sizes, and other qualities. With a rules-based approach, you feed the system data and rules so that it will produce these wheels and no others. The problem is that a rule-heavy system quickly becomes too challenging to maintain. With an ML approach, you instead feed the computer examples of wheels with the correct logo, patterned center cap, number of spokes, size, and so on, and the computer learns to produce the desired wheels through trial and error.

Rule-Based Learning

Rule-based systems have four basic components:

Facts, or a domain of knowledge.
An inference engine, which interprets the facts and takes appropriate actions through rules that include probabilistic, associative, or “If-Then” reasoning (“IF A happens THEN do B”).
A temporary working memory for briefly “remembering” those rules.
A user interface that allows developers to add, subtract, or change input and output signals.

The number of rules depends on the number of actions you want the system to handle, so 20 actions would require manually writing and coding at least 20 rules; the system is locked into following these rules.

Machine Learning

ML is modeled after human intelligence, with the assumption that machine systems can learn from experience and improve their performance accordingly. ML is achieved through:

Supervised learning, whereby developers use labeled input and output data to train the system.
Unsupervised learning, whereby systems draw their own conclusions from unlabeled data.
Semi-supervised learning, which blends supervised and unsupervised learning.
Reinforcement learning, whereby the system learns through trial and error.

In short, ML gives systems the ability to forage outside the box, adapt their “thinking,” and expand their capabilities.

When do you use ML? When do you use rules?

Each situation is different. In short: rules are easy to interpret and are faster, easier, and cheaper to execute, while ML tends to give more accurate results (provided you have enough data) and is easier to maintain than a large rule base. If your system has a large number of actions, you will want ML for faster, cheaper, and more effective results. A short code sketch contrasting the two approaches appears after the summary below.

TL;DR:

Use a rules-based approach when:

There is a small or fixed number of outcomes. For example, an “Add to Cart” button can either be clicked or not.
There is a risk of false positives. Only rules, with their deterministic behavior, can rule these out entirely.
Your employer/team has neither the knowledge nor the resources for ML.

Use ML when:

The system calls for a more adaptive approach: the task is too complex or uncertain for rigid rules.
Situations, data, and events are changing faster than your ability to constantly write new rules.
Linguistic nuances cannot be encapsulated by rigid rules. When you’re working on tasks that call for an understanding of language, you will want the adaptive capabilities of ML.

Google Cloud helps you build, deploy, and scale ML models faster, with pre-trained and custom tooling within its unified AI platform.
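To make the contrast concrete, here is a minimal, illustrative Python sketch (not tied to any particular product) that classifies toy “wheel” records first with a hand-written rule and then with a small decision-tree model trained on labeled examples. The feature names, threshold values, and labels are invented for the example.

# Illustrative only: contrast a hand-written rule with a learned model.
# Requires scikit-learn; the features and labels are made up for the demo.
from sklearn.tree import DecisionTreeClassifier

# Each wheel: (number_of_spokes, diameter_in_inches); label 1 = "our wheel".
X = [(5, 17), (5, 18), (6, 17), (4, 15), (8, 20), (6, 19)]
y = [1, 1, 1, 0, 0, 0]

def rules_based(wheel):
    """Hard-coded rule: does exactly what the developer wrote, nothing more."""
    spokes, diameter = wheel
    return 1 if spokes in (5, 6) and diameter <= 18 else 0

# Machine learning: the model infers its own decision boundaries from examples.
model = DecisionTreeClassifier(random_state=0).fit(X, y)

new_wheel = (6, 18)
print("rules say:", rules_based(new_wheel))
print("model says:", int(model.predict([new_wheel])[0]))

The rule only ever does what was written; the tree adapts if you retrain it on new labeled wheels, which is the maintenance advantage the article describes.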
Extra Credit:
Why Is Self-Supervised Learning the Future of Machine Learning?
The Difference Between Virtual Machines (VMs) and Hypervisors
What Is Automated Machine Learning?

Categories: AI and Machine Learning

What is the Difference Between SOA and Microservices?

If you work with IT or cloud computing, sooner or later you’re apt to come upon the microservices versus service-oriented architecture (SOA) debate. The two approaches are alike in that both break large, complex operations into smaller, more flexible components, both scale to meet the speed and operational demands of a company’s growing data, and both typically rely on cloud or hybrid cloud environments for deployment. From there, opinions differ. Some developers say microservices are an improvement on SOA, while others say there are key differences. Most important: microservices are used for applications, while SOA is geared towards enterprises.

What Are Microservices?

Certain IT projects can be too complex or large to manage, test, and deploy as a whole, so software developers split them into small, individually containerized applications. Each function has its own responsibility and its own team of developers. This helps the company speed up processes, cut costs, and fix problems in one area of the enterprise without dismantling operations of the whole. It also makes functions more effective and fault-resilient, among other benefits.

Example: Amazon.com divides into standalone categories (shipping, selling, customer support, etc.), where separate teams develop and troubleshoot their particular application. That’s in contrast to the traditional monolithic architecture, where each category would be indistinct from the rest of the enterprise.

What Is Service-Oriented Architecture (SOA)?

Service-oriented architecture (SOA) is just what the name says: the enterprise constructs its IT system around the services it delivers rather than around technical or operational aspects. In SOA software architecture, each function contains its relevant code and data integrations for achieving a particular service. As a result, the whole system is interoperable, which enhances efficiency, agility, and productivity.

Example: A single security service is split into separate components for authentication, authorization, audit, policy, encryption, and so forth. Each is furnished with its own code and focuses on its delimited responsibility. (Other functions could include checking a customer’s credit, logging on to a website, or processing an application.)

Differences Between SOA and Microservices

Some developers insist microservices are essentially an upgraded version of service-oriented architecture (SOA), while others find the two approaches complementary. Differences include:

Microservices are leaner and more agile than SOA.
Microservices typically rely on open-source tooling and offer more narrowly focused functionality than SOA.
Microservices are standalone and smaller than most specialized components in SOA systems.
Microservices are granular and narrower in their communication than SOA.
Microservices can be developed, deployed, and tested faster than functions in SOA, and their lifespan is shorter.

In technical terms: microservices tend to use lighter-weight protocols like HTTP REST, while SOA prefers SOAP. In microservices, each service chooses its own communication protocol, while in SOA, a middleware enterprise service bus (ESB) is used. SOA needs governance, while microservices can do without it.

To bring it all together with a possible use case, consider this: enterprise-oriented SOA can tolerate a continual, synchronous flow of requests and responses between services. In application-scoped microservices, synchronous communication would only introduce latency and weaken resilience, so microservices favor asynchronous communication, such as the publish/subscribe (Pub/Sub) model, which helps them stay agile.
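As an illustration of that pattern, here is a minimal sketch of the publishing side using the Google Cloud Pub/Sub Python client. The project ID, topic name, event attribute, and payload are placeholders, and a real microservice would add error handling and schema management.

# Minimal, illustrative publisher: one microservice emits an "order created"
# event and moves on; subscribers process it asynchronously on their own time.
# Requires google-cloud-pubsub and credentials; names below are placeholders.
import json
from google.cloud import pubsub_v1

PROJECT_ID = "my-project"   # placeholder
TOPIC_ID = "orders"         # placeholder

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)

def publish_order_created(order_id, total):
    payload = json.dumps({"order_id": order_id, "total": total}).encode("utf-8")
    future = publisher.publish(topic_path, payload, event_type="order.created")
    print("published message id:", future.result())  # blocks until the broker acks

if __name__ == "__main__":
    publish_order_created("A-1001", 49.95)

The publishing service neither knows nor cares which services consume the event, which is exactly the decoupling described above.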
Bottom Line

Both microservices and service-oriented architecture (SOA) can best be described as an army of small, specialized services (soldiers) trying to conquer a massive problem together instead of one big fighter doing everything. Although some developers tag microservices as the lightweight version of SOA, the real difference is that SOA stakes out the enterprise while microservices focus on applications. Either model helps managers save time and costs by slicing monolithic systems into components, making services easier to work with.

Which is best for you? Both approaches speed up automation. Larger and more diverse enterprises could benefit from the broader and less granular SOA design. Smaller environments, including web and mobile applications, are easier to develop with microservice architecture.

Extra Credit

There’s a science, if not an art, to microservice/SOA applications, which is why entire courses and books are dedicated to this topic. Here are Google Cloud’s best practices for microservice performance.

Categories: AI and Machine Learning

What is Automated Machine Learning?

Automated machine learning (AutoML) automates the steps in your ML workflow, including preparing the data, training the model, evaluating the model, tuning hyperparameters, and generating predictions. This makes your work easier, less onerous, less time-consuming, cheaper, and more accurate.

AutoML is an emerging trend in high tech, with some alarmists warning it will eliminate your tech job. No worries! Careers in data science are here to stay, and automation just gives you more opportunities!

The Purpose of Automated Machine Learning

AutoML automates every part of your ML pipeline, from data preparation to production deployment. Features include:

Cleaning the data - includes removing duplicate or irrelevant information, dealing with missing values, fixing structural errors, and handling outliers.
Feature engineering - supplies the model with features that make it more likely to give you the predictive results you want.
Model selection - chooses one of many candidate models for a predictive modeling problem.
Hyperparameter tuning - selects the best settings for the model’s training and architecture.
Model deployment - integrates the model into the production environment and verifies that it produces the desired results.

Data Preparation

AutoML identifies your type of data––Boolean, discrete, continuous, or text. It also performs task detection: is the problem binary classification, multiclass classification, regression, clustering, or ranking? Finally, AutoML checks whether your data is ready for training.

Feature Engineering

Once the data has been cleaned and is ready for training, data scientists normally face the tedious task of preparing a suitable predictive model. AutoML does much of that work for you in minutes:

Feature selection - chooses the best set of features for your model to help it predict as required.
Data preprocessing - converts the raw (original) data into a readable format.
Feature extraction - retains only the critical features and data that your model needs to become useful, eliminating anything redundant or irrelevant.
Skewed data detection - eliminates or corrects skewed data (namely outliers in the raw data that will distort your results if you keep them).
Missing values detection - fills in missing data (for example, if participants have omitted a survey question in the data fed to the model, the pipeline imputes a value such as 0).

Model Selection

Model selection includes finding the best type of model to use and the specific structure most suitable for a given dataset. This is followed by model evaluation, where automation helps you scrutinize the entire process, from validation procedures to error analysis and configuration.

Hyperparameter Tuning

Hyperparameters are the settings that govern how a model trains (for example, learning rate, tree depth, or number of layers), as opposed to the parameters the model learns from data. Tuning them manually can take a while and requires familiarity with algorithms and their strengths and weaknesses; the work needs to be thorough and carefully designed. Unsurprisingly, few data scientists are available for this critical step. AutoML does the task at a fraction of the cost and time, and with fewer errors. (A small tuning sketch appears at the end of this article.)

Deployment

AutoML helps you deploy the model as a web service to predict on new data without writing code. It also allows you to test its generated predictions and fine-tune results.

Use Cases

AutoML is most commonly used for the following:

Proof of concept - to help you decide whether a design is feasible; for example, whether to proceed with a specific software application.
Baseline model - using a good-enough model for decent results; for example, testing on a previous project to guide you in your task.
Deploy to production - AutoML is used as an end-to-end tool to expedite, improve, and automate your labor.

Tools

The most popular AutoML applications are:

RapidMiner - free student version available.
Dataiku - free community version available.
DataRobot - commercial.
H2O Driverless AI - commercial.

Google Cloud AutoML

Google Cloud AutoML has a range of services that include the following:

AutoML Vision for object detection.
Video Intelligence API for classifying video segments and tracking objects in videos.
AutoML Natural Language for classifying and analyzing text, and AutoML Translation for translating textual data.
AutoML Tables for prediction and classification from structured data, like databases or spreadsheets.

Wrap-up

AutoML typically provides faster, more accurate outputs than hand-coded algorithms, saves companies money on training staff or hiring experts, and makes ML more accessible to newcomers and to organizations that lack the funds to hire skilled data scientists. That said, AutoML is here to improve your efficiency, not replace you. So, although you no longer need to be involved in every step of the ML process, you will still want to evaluate and supervise the model.

Let’s Connect!

Leah Zitter, Ph.D., has a Masters in Philosophy, Epistemology, and Logic and a Ph.D. in Research Psychology.
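As promised above, here is a minimal sketch of what automated hyperparameter tuning looks like in practice, using scikit-learn’s GridSearchCV on a toy dataset. Full AutoML platforms automate far more than this (data preparation, model selection, deployment), and every value below is illustrative.

# Illustrative only: automated hyperparameter search over a small grid.
# Full AutoML systems extend this idea across data prep, model choice,
# and deployment; the dataset and grid below are toy examples.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

param_grid = {
    "n_estimators": [50, 100, 200],   # hyperparameters: set before training,
    "max_depth": [2, 4, None],        # not learned from the data itself
}

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    cv=5,   # 5-fold cross-validation scores each combination automatically
)
search.fit(X, y)

print("best hyperparameters:", search.best_params_)
print("best cross-validated accuracy:", round(search.best_score_, 3))

The search tries every combination and reports the best one, which is exactly the tedious, error-prone loop the article says automation takes off your hands.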

Categories: AI and Machine Learning

How to Run Python on Google Cloud

The Big Question—Can You Use Python in the Cloud?

Python is an excellent tool for application development. It offers a diverse field of use cases and capabilities, from machine learning to big data analysis. This versatility has carved out a real niche for Python in cloud computing, and as DevOps becomes more and more cloud-based, Python is following it there. That’s not to say that running Python in the cloud comes without challenges: for example, applications that perform even the simplest tasks need to run 24/7 for users to get the most out of them, and that can consume a lot of resources.

Python can run numerous local and web applications, and it has become one of the most common languages for scripting automation to synchronize and manipulate data in the cloud. DevOps engineers, operations teams, and developers prefer Python mainly for its many open-source libraries and add-ons. It’s also the second most common language used in GitHub repositories. Today we’re talking about running Python scripts on Google Cloud and deploying a basic Python application to Kubernetes.

How to Use Google Cloud for Programming

Businesses all over the world can benefit from cloud options. Both cloud-native and hybrid structures offer technological benefits, like data warehouse modernization and strong security compliance, that fortify the development process and keep it running continuously. But running code on Google Cloud requires a proper setup and a migration strategy—specifically a Kubernetes migration strategy—if you intend to orchestrate containerization.

Generally speaking, any code deployed in Google Cloud is run by a virtual machine (VM). Kubernetes, Docker, and even Anthos make application modernization possible for large applications. For smaller scripts and deployments, a customizable VM instance is adequate for running a Python script on Google Cloud; it lets you choose the processor size, the amount of RAM, and even the operating system for running your applications.

1. Check the Requirements for Running Python Script on Google Cloud

Before you can work with Python in Google Cloud, you need to set up your Python development environment. After that, you can code for the Python cloud environment from your local device, but you must install the Python interpreter and the SDK. The complete list of requirements includes:

Install the latest version of Python.
Use venv to isolate dependencies.
Install your favorite Python editor. One popular Python Integrated Development Environment (IDE) is PyCharm.
Install the Google Cloud SDK (gcloud CLI) to access Google Cloud from Python.
Install any third-party libraries that you prefer.

2. Google Container Registry and Code Migration

To begin scheduling Python scripts on Google Cloud, teams must first migrate their code to the VM instance. For the Python VM setup, many experts recommend using Google Container Registry for storing Docker images and the Dockerfile.

First, you must enable the Google Container Registry. The Container Registry requires billing to be set up on your project, which can be confirmed on your dashboard. Since you already have the Cloud SDK installed, use the following gcloud command to enable the registry:

gcloud services enable containerregistry.googleapis.com

If you have images on third-party registries, Google provides step-by-step instructions with a sample script for migrating them to the Registry. You can do this for any Docker image that you store on third-party services, but you may also want to create new Python projects that will be stored in the cloud.
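The container built in step 3 below runs a small Flask application. As a point of reference, a minimal, hypothetical app.py that such an image could serve might look like the sketch below (the accompanying requirements.txt would contain a single line, Flask).

# app.py - minimal, hypothetical Flask app for the container example below.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # Simple response so the deployed service has something to serve.
    return "Hello from Python on Google Cloud!"

if __name__ == "__main__":
    # The Dockerfile's CMD runs "flask run --host=0.0.0.0" instead of this block,
    # but keeping it lets you run "python app.py" locally as well.
    app.run(host="0.0.0.0", port=5000)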
3. Creating a Python Container Image

After you create a Python script, you can create an image for it. A Dockerfile is a text file that contains the commands to build, configure, and run the application. The following example shows the content of a Dockerfile used to build an image:

# syntax=docker/dockerfile:1
FROM python:3.8-slim-buster
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
CMD [ "python3", "-m", "flask", "run", "--host=0.0.0.0"]

After you write the Dockerfile, you can build the image. Use the following command to build it:

$ docker build --tag python-docker .

The --tag option tells Docker what to name the image. You can read more about creating and building Docker images here.

After the image is created, you can move it to the cloud. You must have a project set up in your Google Cloud Platform dashboard and be authenticated before migrating the container. The following command submits the image build to Google Cloud:

gcloud builds submit

These basic commands will migrate a sample Python image; full instructions can be found in the Google Cloud Platform documentation.

4. Initiating the Docker Push to Create a Google Cloud Run Python Script

Once the Dockerfile has been uploaded to the Google Container Registry and the Python image has been created, it’s time to initiate the Docker push command to finish the deployment and prepare the configuration files. Running a Python script on Cloud Run requires creating two configuration files before a developer can claim the Kubernetes cluster and deploy to it.

The Google Cloud Run platform has an interface for deploying the script and running it in the cloud. Open the Cloud Run interface, click “Create Service” from the menu, and configure your service. Next, select the container pushed to the cloud platform and click “Create” when you finish the setup.

5. Deploying the Application to Kubernetes

The final step in scheduling a Python script on Google Cloud is to create the service file and the deployment file. Kubernetes is commonly used to automate Docker images and deploy them to the cloud. Orchestration tools use a language called YAML to define the configurations and instructions that deploy and run the application. Once the appropriate files have been created, it’s time to use kubectl to carry out the final stage of running Python on Google Cloud. Kubectl is a command-line tool for running commands against Kubernetes clusters, such as deployments, inspections, and log visibility. It’s an integral step in ensuring the Python script runs efficiently in Kubernetes and the last leg of the migration process.

To deploy a YAML file to Kubernetes, run the following command:

$ kubectl create -f example.yml

You can verify that your files deployed by running the following command:

$ kubectl get services

Extra Credit

The Easiest Way to Run Python In Google Cloud (Illustrated)
Running a Python Application on Kubernetes
Google Cloud Run – Working with Python

Categories: AI and Machine Learning, Application Development, Containers and Kubernetes

The Basics of Natural Language Processing

How do we get non-humans to talk to us, translate text from one language to another, read and understand our documents, summarize large volumes of text rapidly, and give us answers, all in real time? Because that’s exactly what machines like Alexa or Siri do, or the conversational AI at Capital One that answers my questions (and often gets them wrong), or Google search engines and the like that not only use autocorrect to assist me with my queries but also spit out responses that answer them. In the same category are AI translators like Google Translate, which instantly translates text from one language to another (I just hover my phone over the word and Google does the rest!), and plagiarism checkers like Grammarly, which the editors of C2C use to check whether this article is plagiarized. (No fear!)

It’s not much different from teaching children or ESL students to read and speak English, or any language for that matter. We do it through natural language processing, called NLP.

C2C Event Alert: Interested in NLP? Keep up with the FullFilld story: Journey to Deployment and hear how they’re using NLP to build their product. You’ll connect directly with the CTO, @mpytel, @YoshEisbart, and the development teams, and you can share your own expertise, provide feedback, and learn how they’re overcoming similar challenges.

What’s Natural Language Processing?

Natural language processing (NLP) traces back to the 1950s and Alan Turing, who sought to determine whether a computer could mimic human responses. NLP is a two-step process:

Scientists strip the training data to its rudiments for machines to work with. This is called data preprocessing.
Scientists then use one or another machine learning technique to train the algorithm to understand and respond as required.

Here’s how it works.

Phase 1: Data Preprocessing

Computer scientists break the text down to its basics through the following steps (a short preprocessing sketch appears at the end of this article):

1. Segmentation. The text is broken down into its smallest constituent units. Example: the sentence “Digital assistants are mostly female because studies show you’re more attracted to a woman's voice” gets broken into “Digital assistants are mostly female” and “Studies show you’re more attracted to a woman's voice.”

2. Tokenizing. We need the algorithm to understand the constituent words, so we “tokenize” them. Example: “Digital assistants are mostly female” becomes the isolated tokens “Digital”, “assistants”, “are”, “mostly”, “female”.

3. Stop words. We eliminate inessential words that are only there to make a sentence more cohesive. Common examples are “and”, “the”, and “are”. Example: in “Digital assistants are mostly female”, the stop words are “are” and “mostly”, leaving us with “Digital”, “assistants”, “female”.

4. Stemming. Now that we’ve broken the document down to its essentials, we need to explain word forms to our machine. We do that by pointing out that words such as skip+ing, skip+s, and skip+ed are the same word with added suffixes.

5. Lemmatization. We also consider the context and convert each word to its base form, or “lemma”, taking features like tense and number into account. Common examples are “am”, “are”, and “is”, which all reduce to “be”. Example: in “Digital assistants are mostly female”, we tag the word “are” as present plural.
6. Part-of-speech tagging. Here’s where we explain the concepts of nouns, verbs, adjectives, adverbs, and the like to the machine by adding those tags to our words. Example: “Studies (noun) show (verb) you’re (pronoun) more (adverb) attracted (verb) to (preposition) a (article) woman's (noun) voice (noun).”

7. Named entity tagging. We introduce our machine to pop-culture references and everyday names by flagging names of movies, important personalities, locations, and so forth that may appear in the document.

Phase 2: Algorithm Development

Computer scientists use different natural language processing methods to train the model to understand and respond accordingly. The two most common methods are:

Machine learning algorithms, like Naive Bayes, that teach our models human sentiment and speech.
Rules-based systems, namely human-made rules that scientists use to program algorithms. Example: robots in Saudi Arabia get passports. IF AI Sophia lives in Saudi Arabia, THEN she gets guaranteed nationality.

What Is NLP Used For?

Natural language processing (NLP) is used for a variety of functions that include:

Text classification, where you teach the algorithm to recognize and categorize text. Example: Gmail, with its spam classifier that filters spam email.
Text extraction, where an algorithm is fed a quantity of material and asked to rapidly summarize it. Example: Google Scholar, which summarizes quantities of academic research material.
Machine translation, where the algorithm is trained to translate spoken or written words from one language to another.
Natural language generation, where an AI assembles sense from disparate items. Example: automated journalism, where an engine scrapes the web for news and returns a summary in seconds.

Open Problems in NLP

As evolved as the field has become, machines are still challenged in certain areas. These include:

Context. Even the most sophisticated machines are challenged by ambiguous words. Example: you could tell an AI to meet you at the “bank” and it could go to the stream or to Wells Fargo. Likewise, you may tell the machine “You’re great!” and it exclaims “Thank you!” when really you’re frustrated: “You’re (grunt) great.”
The evolving use of language. The model needs to be retrained to acquire updated language and trending expressions.
Named Entity Recognition (NER). Recognizing the names of “big shots” or famous companies is insufficient; algorithms need to recognize items such as person names, organizations, locations, medical codes, quantities, monetary values, and so forth.
Sophisticated vocabulary. To be super-helpful, NLP needs to acquire a broad and nuanced vocabulary, which for most NLP software applications is (at the moment) beyond reach.

Bottom Line

The wonder of natural language processing (NLP) is that these non-human machines are more intelligent and articulate than a random sampling of our human population. Their knowledge is immense, their linguistic skills incredible (the most sophisticated have mastered more than 100 languages), and their responses are mostly spot-on. What they lack is context, emotion, slang, and the like. That’s our instructional challenge, and it’s an area where the Google Natural Language API is said to excel. On the other hand, some AI researchers believe machines may never acquire this human-level cognition. They’re machines, after all.

Let’s Connect!

Leah Zitter, Ph.D., has a Masters in Philosophy, Epistemology and Logic and a Ph.D. in Research Psychology.
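To make Phase 1 concrete, here is a minimal, illustrative Python sketch of the first three preprocessing steps (segmentation, tokenizing, stop-word removal) using only the standard library. The tiny stop-word list and the regular expressions are simplifications, not what production NLP toolkits such as NLTK or spaCy actually do.

# Illustrative preprocessing only: real NLP toolkits handle these steps
# far more robustly; the stop-word list here is a toy.
import re

STOP_WORDS = {"are", "is", "am", "the", "and", "mostly", "because", "to", "a"}

def segment(text):
    """Split text into rough sentence-like units at ., !, ? boundaries."""
    return [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]

def tokenize(sentence):
    """Lowercase and pull out word tokens (keeps internal apostrophes)."""
    return re.findall(r"[a-z']+", sentence.lower())

def remove_stop_words(tokens):
    return [t for t in tokens if t not in STOP_WORDS]

text = ("Digital assistants are mostly female because studies show "
        "you're more attracted to a woman's voice.")

for sentence in segment(text):
    tokens = tokenize(sentence)
    print("tokens:", tokens)
    print("without stop words:", remove_stop_words(tokens))

Running this prints the token list and then the same list with the filler words stripped out, which is the raw material the later stemming, lemmatization, and tagging steps work on.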

Categories: AI and Machine Learning

What’s the Difference Between IoT and IIoT?

All of us regular people are awash in a world of the Internet of Things (IoT). That’s where we, as consumers, use WiFi-connected devices to control the world around us. The Industrial Internet of Things (IIoT), on the other hand, works through smart sensors rather than consumer devices and refers to industries: health care, retail, agriculture, government, and so forth. Its ramifications are significant, its applications more diverse, and its impact potentially world-changing.

Internet of Things (IoT)

In the broadest sense, the term IoT encompasses all the regular “dumb” things connected to the Internet, like smart toasters, attached rectal thermometers, and fitness collars for dogs. You use your internet-connected device, usually a smartphone, to “tell” the physically connected object how to act. The device prompts the connected object to react when, where, and how you want it to, and it also feeds you real-time information from that object. Review some examples:

Wearable devices and fitness trackers (e.g., Jawbone Up, Fitbit, Pebble). You program these accessories through the internet; they monitor your health.
Home automation (e.g., Nest, 4Control, Lifx). These internet-controlled applications monitor and control home features such as lighting, climate, entertainment systems, security, and appliances.
Industrial asset monitoring (e.g., GE, AGT Intl.): internet-connected solutions that remotely monitor and track your assets and facilities.

Industrial Internet of Things (IIoT)

Here’s where the world outside our doors uses the digitally connected world to feed it automatic, real-time reports on the safety, productivity, and economics of industries and their workers. Unlike IoT, communication reaches us through purpose-built sensors rather than directly through our personal devices.

Industry stakeholders use these smart sensors to receive immediate information on their assets, which helps them monitor, collect, exchange, and analyze incoming data. Entire cities operate off these sensors; they’re called smart cities. In effect, the whole developed world is one substantial Industrial Internet of Things, since we’re all connected and interconnected through these sensors. Review some examples:

Energy: water and sewage utility services rely on distributed but connected self-service water kiosks to gather real-time data on water quality.
Health care: hospitals and healthcare institutions use networks of intelligent electronic devices to monitor patients' health status 24/7.
The automotive industry: smart cars use sensors to “feel out” their environment and predict danger.

Technologies That Fuel (I)IoT

IoT and IIoT work through the following technologies:

AI and ML, which train these devices to respond as they do.
Cybersecurity, for insulating their systems from attackers.
Cloud computing, for storing their functionality and data in cloud storage for scalability and security.
Edge computing, which brings data storage and processing closer to the device for faster response times.
Data mining, which collates information on their experiences to prevent problems and improve their operations.

Pros and Cons of (I)IoT

It would be a sorry world without (I)IoT. Babies would be left crying, pets would be lost, thieves could more easily break into homes, more older people would die from falls, and so forth. That’s as regards IoT. As for IIoT, just think how many lives have been saved through heart and EKG monitors, themselves products of IIoT.
There’s also Amazon’s same-day shipping, which is achieved through IoT-programmed robots stocking shelves and loading trucks.

On the other hand, IIoT can be extremely dangerous. All it takes is one malicious actor cracking a single endpoint of a system to place hundreds of thousands of lives at risk, or even to stall an entire country. Consider one example: hijacking vehicles. Modern vehicles have an internet-connected OBD-II device, so it’s difficult but not impossible for skilled hackers to remotely attack vehicles, including ambulances, and terrorize a nation. “Now I am become Death, the destroyer of worlds” (but also the creator of worlds) aptly describes the ramifications of IIoT. Consequential!

Related Concepts

Other terms that overlap with (I)IoT are:

M2M (machine-to-machine) communication, primarily used in the telecoms sector to refer to IP-transmitted data.
Web of Things, which relates more narrowly to software architecture.
Industry 4.0, the name for our ongoing revolutionary era of smart manufacturing and industrial automation.
Smart systems or intelligent systems, which use AI- and ML-trained innovations that help us manage and predict.
Pervasive computing, which embeds computing into everyday objects and transforms them into intelligent things.

Bottom Line

The rock-bottom difference between IoT and IIoT is that IoT is B2C (business-to-consumer), while IIoT is B2B (business-to-business). The first is user-centered, while the second deals with groups, communities, and cities of people. As such, the second is more consequential than the first. Nevertheless, both categories provide valuable connectivity, efficiency, scalability, time savings, and cost savings for individuals and industries alike. When it comes to Google Cloud, its robust architecture provides IoT and IIoT operators with the tools they need to build the future.

Let’s Connect!

Leah Zitter, PhD, has a Masters in Philosophy, Epistemology and Logic and a PhD in Research Psychology.

Categories: AI and Machine Learning