Learn | C2C Community

6 Cloud Trends to Watch in 2022

The following article was written by C2C Global President Josh Berman (@josh.berman) as a member exclusive for TechCrunch. The original article is available here.

The past two years have been an exciting period of growth for the cloud market, driven by increased demand for new technology during COVID-19 and the proliferation of the “work-from-anywhere” culture. IT leaders shifted workloads to the cloud to ensure business continuity for the remote workforce, leading to skyrocketing adoption of cloud computing. This momentum is expected to pick up in 2022 and beyond.

For many businesses, the pandemic accelerated their digital transformation plans by months, or even years. Reliance on cloud infrastructure will only continue to grow as organizations adjust to the hybrid work model. Gartner projects that global spending on cloud services will reach over $482 billion in 2022, up from $313 billion in 2020.

As we start the new year, C2C, an independent Google Cloud community, has identified six cloud computing trends to watch in 2022.

More people are harnessing new technologies

The pandemic inspired a new generation of entrepreneurs. Whether out of necessity after mass layoffs, a desire for a more flexible lifestyle, or the inspiration to finally pursue a passion, millions have started their own ventures. As their businesses grow and digitize, entrepreneurs across industries are embracing the cloud and adopting technologies like machine learning and data analytics to optimize business performance, save time, and cut expenses.

The benefits to small businesses and startups are countless. For one, the cloud makes data accessible from anywhere with an internet connection, enabling the seamless collaboration necessary in a hybrid work environment. Without having to spend on expensive hardware and software, entrepreneurs can invest in other areas as they scale their businesses.
We often see founders leveraging the power and ease of use of Google Cloud Platform AI and ML tools to rapidly prototype and build applications. They’ve used this technology to create unique and exciting solutions, like tools that use ML to analyze English pronunciation or predict a person’s mood from their breath.

There’s an increased desire for more direct access to product developers

As more users shift to the cloud, there is an increased desire to connect and network with the product developers who have worked behind the scenes to bring the latest technologies to market. Online communities, like C2C, make having these conversations possible and easy. These conversations ultimately help users find the right applications and successfully deploy them to ensure operational success.

Greater emphasis on security in the cloud

Last year, businesses looked to the cloud to reshape their operations and become more agile. While cloud computing certainly offers flexibility and productivity benefits, it also leaves organizations more vulnerable to cyber threats and data breaches. For that reason, security is going to become a larger part of the cloud conversation throughout 2022 and beyond, putting greater emphasis on building more security into the cloud. As the world continues to go digital, organizations are being tasked with ensuring that security within the cloud is properly integrated into evolving business models.

More organizations will seek out data solutions

Almost all enterprises operate in multi-cloud environments. As a result, a lot of valuable data is spread across systems, creating a need to make data accessible to more analytics tools. Cross-cloud analytics solutions are on the rise to help data analysts manage all their insights.
At C2C, we’ve discussed the noticeable rise in the number of people and companies looking at data solutions, specifically BigQuery, Google Cloud’s fully managed, serverless data warehouse. These companies are typically mapping out their data strategy, but, interestingly, some companies trying to work with AI and ML realize they need a solution that makes their data consistent and easy to store.

Productivity tools will become even more sophisticated

When the world was forced into a remote work model overnight during the pandemic, many companies were not prepared for the challenge of immediately shifting their processes to a virtual format. The ongoing challenge for many companies that have transitioned to a hybrid model has been determining how best to keep both remote and in-person team members engaged. This opened doors for cloud-based collaboration tools like Google Workspace, which are only going to become a bigger part of our day-to-day operations. These solutions offer capabilities like document collaboration, integrated chat, virtual whiteboards, and more. Much of that growth has already occurred: Nearly 80% of workers were using collaboration tools for work in 2021, up from just over half of workers in 2019, according to Gartner research. Not only are more companies going to adopt these cloud-based collaboration solutions, but the solutions themselves are going to be enhanced and evolve as the needs of the hybrid workforce change.

Cloud certifications are becoming more sought after by employers

As industries accelerate remote adoption of cloud technologies, certifications and other IT credentials are becoming increasingly important and sought after by employers. And more IT professionals see the benefits of earning these certifications as well.
More than 90% of IT leaders say they’re looking to grow their cloud environments in the next several years, yet more than 80% of those same leaders identified a lack of skills and knowledge among their employees as a barrier to achieving this growth. It turns out the next big challenge for companies will not be managing cloud technology, but finding enough qualified workers certified in it.

Categories: Google Cloud Strategy, Cloud Operations, C2C News

Monitoring and Observability Drive Conversation at C2C Connect: France Session on January 11

On January 11, 2022, C2C members @antoine.castex and @guillaume blaquiere hosted a powerful session for France and beyond in the cloud space. C2C Connect: France sessions intend to bring together a community of cloud experts and customers to connect, learn, and shape the future of cloud.

60 Minutes Summed Up in 60 Seconds

Yuri Grinshteyn, Customer SRE at Google Cloud, was the guest of the session. Also known as “Stack Doctor” on YouTube, Grinshteyn advocates for the best ways to monitor and observe services, following the SRE practices Google has learned in its own service SRE teams. Grinshteyn explained the difference between monitoring and observability: monitoring is “only” the data about a service or resource, while observability is the behavior of the service's metrics over time. To observe data, you need different data sources: metrics, of course, but also logs and traces. There are several tools available, and the purpose of each is observability: Fluentd, OpenCensus, Prometheus, Grafana, etc. All are open source, portable, and compatible with Cloud Operations. The overhead of instrumented code is practically invisible, and the metrics it provides are worth far more than the few CPU cycles lost to it. Both microservices and monoliths should use trace instrumentation. Even a monolith never works alone: it uses Google Cloud services, APIs, databases, etc. Trace allows us to understand North-South and East-West traffic.

Get in on the Monitoring and Observability Conversation!

Despite its 30-minute time limit, this conversation didn’t stop. Monitoring and observability is a hot topic, and it certainly kept everyone’s attention. The group spent time on monitoring, logging, error budgets, SRE, and other topics such as:

Cloud Operations
Managed Service for Prometheus
Cloud Monitoring

Members also shared likes and dislikes.
For example, one guest, Mehdi, “found it unfortunate not to have out-of-the-box metrics on GKE to monitor golden signals,” and said “it’s difficult to convince ops to install Istio just for observability.”

Preview What's Next

Two upcoming sessions will cover topics that came up but didn’t make it to the discussion floor. If either of these events interests you, be sure to sign up to get in touch with the group!

Extra Credit

Looking for more Google Cloud products news and resources? We got you. The following links were shared with attendees and are now available to you!

Video of the session
Cloud Monitoring
Managed Service for Prometheus
sre.google website
SRE books
Stack Doctor YouTube playlist
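Grinshteyn's point about instrumentation overhead is easy to see in code. The stdlib sketch below (hypothetical endpoint name, toy bucket boundaries) mimics what a metrics client such as the Prometheus or OpenCensus libraries named above does on each request: one counter increment and one histogram bucket update, i.e. a few CPU cycles per call.

```python
import time
from collections import Counter, defaultdict

# Toy stand-ins for the metric types a real client library provides.
request_count = Counter()           # counter: monotonically increasing
latency_buckets = defaultdict(int)  # histogram: latency distribution

def observe(endpoint, seconds):
    """Record one request: bump a counter and one latency bucket."""
    request_count[endpoint] += 1
    for bound in (0.005, 0.01, 0.05, 0.1, float("inf")):
        if seconds <= bound:
            latency_buckets[(endpoint, bound)] += 1
            break

def handle_request(endpoint):
    start = time.perf_counter()
    # ... real work would happen here ...
    observe(endpoint, time.perf_counter() - start)

for _ in range(1000):
    handle_request("/checkout")

print(request_count["/checkout"])  # 1000
```

A real client would also expose these values on a `/metrics` endpoint for scraping; the recording step itself is this cheap.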

Categories: Data Analytics, DevOps and SRE, Cloud Operations

Drive Your Team's Software Delivery Performance (full video)

This C2C Deep Dive was led by Nathen Harvey (@nathenharvey), cloud developer advocate at Google, who helps the community understand and apply DevOps and SRE practices in the cloud. The Google Cloud DORA team has undertaken a multi-year research program to improve your team’s software delivery and operations performance. In this session, Nathen introduced the program and its research findings, and invited Google Cloud customer Aeris to demonstrate the tool in real time.

Participate in the survey for the State of DevOps Report by July 2.

The full recording from this session includes:

(1:40) Speaker introduction
(3:20) How technology drives value and innovation for customer experiences
(4:20) Using DORA for data-driven insights and competitive advantages
(7:15) Measuring software delivery and operations performance
  - Deployment frequency
  - Lead time for changes
  - Change fail rate
  - Time to restore service
(14:20) Live demonstration of the DevOps Quick Check with Karthi Sadasivan of Aeris
(23:00) Assessing software delivery performance results
  - Understanding benchmarks from DORA’s research program
  - Scale of low, medium, high, or elite performance
  - Predictive analysis by DORA to improve outcomes
(29:30) Using results to improve performance
  - Capabilities across process, technical, measurement, and culture
  - Quick Check’s prioritized list of recommendations
(37:40) Transformational leadership for driving performance forward
  - Psychological safety
  - Learning environment
  - Commitment to improvements
(41:45) Open Q&A

Other Resources:

Take the DORA DevOps Quick Check
Results from the Aeris software delivery performance assessment
Google Cloud DevOps
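The four key metrics listed at (7:15) are simple to compute once you log deployments and incidents. Here is a minimal sketch with a hypothetical record format (DORA's actual Quick Check works from survey responses, not logs like these):

```python
from datetime import datetime, timedelta

# Hypothetical deployment log: when each deploy shipped, when its change
# was committed, and whether it caused a failure in production.
deploys = [
    {"at": datetime(2022, 6, 1), "committed": datetime(2022, 5, 31), "failed": False},
    {"at": datetime(2022, 6, 3), "committed": datetime(2022, 6, 1),  "failed": True},
    {"at": datetime(2022, 6, 5), "committed": datetime(2022, 6, 4),  "failed": False},
    {"at": datetime(2022, 6, 8), "committed": datetime(2022, 6, 7),  "failed": False},
]
# Hypothetical restore times for incidents caused by failed deploys.
restores = [timedelta(hours=2)]

days = (deploys[-1]["at"] - deploys[0]["at"]).days or 1
deploy_frequency = len(deploys) / days                        # deploys per day
lead_times = [d["at"] - d["committed"] for d in deploys]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)
change_fail_rate = sum(d["failed"] for d in deploys) / len(deploys)
time_to_restore = sum(restores, timedelta()) / len(restores)

print(deploy_frequency, avg_lead_time, change_fail_rate, time_to_restore)
```

DORA's benchmarks then map each number onto the low/medium/high/elite scale discussed at (23:00).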

Categories: Application Development, DevOps and SRE, Cloud Operations, Session Recording

8 Reasons Why IoT Operations (IoTOps) Is the Future of Developer Productivity

The Internet of Things (IoT) affects everything from street lighting, smart parking, air quality, ITS systems, IP cameras, and waste collection to digital signage. When IoT is managed, monitored, and maintained effectively, it changes everything from our cities to our utilities, but we need to stay on top of its challenges. These include:

How do you connect thousands of IoT devices to back-office systems?
How do you manage IoT platforms from multiple vendors?
How can you install and maintain the various IoT devices, some more complex than others?
If you run a team, how can you guide those workers through step-by-step workflows and diagnostics?

That’s where IoTOps, short for IoT operations, comes in. In our Google stratosphere, we’re given Fuchsia OS to use. Think of it as a cloud-based SaaS solution built specifically for IoT.

IoTOps integrates the data.

IoTOps helps you manage millions of IoT components, such as smart streetlights, traffic signals, power line sensors, garbage, parking, and air quality sensors, from one pane of glass. It integrates the various connected devices with back-end systems quickly, easily, and efficiently. IoT management also includes the devices and the gateways the IoT devices are connected to: a humongous enterprise. All step-by-step diagnostics can be managed and monitored from this one pane of glass for a finished product to reach smartphones and tablets.

IoTOps speeds up the process.

By making your workflow configurable, IoTOps helps you manage the entire life cycle of your IoT operations. You have your IoT planning, inventory, installation, maintenance, and work orders in one place, making the process operational and fast.

IoTOps simplifies IoT management.

IoTOps tames your IoT explosion by helping you manage its escalating volume of data from one place. Not every project needs the same level of management or the same degree of care and attention.
IoTOps brings to the forefront the facets that need special attention and helps you design, regulate, and monitor your network performance from one pane of glass.

IoTOps connects the workforce.

IoTOps helps IT and operations work together, much the same way DevOps does. With all data displayed in one crucible, engineers can stream data to diverse IoT applications and update all back-office systems (e.g., GIS, CMS, asset management, network management, CRM, and billing). At the other end, local plant technicians can use it to monitor and troubleshoot the industrial network. In the event of a device failure, technicians can work on rapid device replacement.

IoTOps gives you actionable results.

IoTOps serves as an Analytics as a Service (AaaS) dashboard, giving you the insight to build on your IoT data for actionable results. Put another way, it provides you with visibility into your IoT projects’ inner workings and helps you analyze the endless volume of data emitted from your connected devices.

IoTOps detects threats.

IoTOps alerts you to anomalies and changes in IoT response time, characteristics, and behavior. It’s like an integration Platform as a Service (iPaaS), which standardizes how applications are integrated across the workflow. When differences are detected, it brings them to your attention promptly, so you can act on these cues instantly and prevent mishaps such as network breaches.

IoTOps saves human labor, costs, and productivity.

IoTOps reduces downtime by catching mishaps right away. Its timely intervention saves you the expense of fixing or replacing components. And its lean workflow does away with data drift, giving operations teams and data scientists the creativity and motivation to continue their work. You also don’t need to hire the many other specialists you would otherwise have required for deployment.

IoTOps helps with incident management.
The IoT impact on infrastructure and operations (I&O) can be significant, which is why it's crucial to catch mishaps in their beginning stages. An IoTOps platform helps you see the mass of connected facets and incoming data across environments, making the learning process error-resilient, stopping software from getting lost, and keeping your team on the same page. Very simply, IoTOps platforms help you manage, monitor, and maintain your IoT operations. Their value is immense. They allow you to complete important IoT projects from start to finish, with all components configured precisely according to manufacturing, network, and security specifications and with faster completion times. There are no more missing items or IoT elements that misbehave once they're configured in the network. Costly downtime and sunk expenses are a thing of the past, since platforms like Google Fuchsia OS automate your IoT projects in a streamlined CI/CD process.

Let’s Connect!

Leah Zitter, Ph.D., has a master’s in philosophy, epistemology, and logic and a Ph.D. in research psychology.
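As a toy illustration of the "one pane of glass" idea above, the sketch below (all device names, IDs, and statuses are hypothetical) rolls heterogeneous device telemetry up into a single health summary and surfaces the facets that need attention:

```python
from collections import defaultdict

# Hypothetical telemetry from a mixed fleet: (device_type, device_id, status).
telemetry = [
    ("streetlight", "sl-001", "ok"),
    ("streetlight", "sl-002", "fault"),
    ("parking",     "pk-101", "ok"),
    ("parking",     "pk-102", "ok"),
    ("air_quality", "aq-300", "offline"),
]

def fleet_summary(readings):
    """Aggregate per-type health counts: the 'single pane of glass'."""
    summary = defaultdict(lambda: {"ok": 0, "attention": 0})
    for device_type, _, status in readings:
        key = "ok" if status == "ok" else "attention"
        summary[device_type][key] += 1
    return dict(summary)

def needs_attention(summary):
    """Surface only the device types with a problem."""
    return sorted(t for t, c in summary.items() if c["attention"] > 0)

summary = fleet_summary(telemetry)
print(needs_attention(summary))  # ['air_quality', 'streetlight']
```

A real IoTOps platform would feed this kind of rollup from millions of devices and trigger work orders automatically; the aggregation idea is the same.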

Categories: Cloud Operations, Databases

9 Reasons Why MLOps Is the Future of Developer Productivity

MLOps—or machine learning (ML) operations—is the equivalent of DevOps, but with a significant difference. DevOps concerns itself with the actual software delivery cycle, working to close the gap between development and IT teams so they build, test, and release software faster and more reliably. MLOps aims to achieve the same results in a data science and ML context. As the team at Google Cloud says, “the real challenge isn't building an ML model. The challenge is building an integrated ML system and to continuously operate it in production.”

For a successful ML model, several processes must be in place and continuously work well together, resembling the continuous, flawless, and high-quality delivery of an assembly line that produces expert results without fail. Here are nine reasons why MLOps is essential for developer productivity.

1. MLOps makes the ML process faster.

Since the ML process involves countless steps—from design to development, testing, and delivery—engineers need a function that cuts through the manual sluggishness and expedites the cycle. Without MLOps, the process is time-consuming, especially if the model is upgraded across different ML frameworks. Communication between teams would also require diverse sign-offs and tedious back-and-forths, dragging out an already slow process to months if not years.

2. MLOps automates the ML process.

A manual ML process means code written from scratch for each use case. There would also be numerous bottlenecks, with software getting stuck at any stage in the process and work stopped indefinitely; software may never make it to the finish line. ML platforms that help you with MLOps keep all versions of the work documented, stored, and shared, helping you avoid bottlenecks. Stakeholders set KPI benchmarks, and the project flows on to completion.

3. MLOps creates repeatable workflows.
MLOps allows custom-built steps to be reused, leveraged, and built on, not just by the author, but also by other data scientists on your team and across your organization. Just as DevOps shortens production life cycles by improving products with each iteration, MLOps drives insights by shortening the life cycle between ML training and development.

4. MLOps makes the ML process error-resilient.

A manual ML process is drastically error-prone, with issues like training-serving skew. Lack of coordination between the operations and data science teams leads to unexpected differences between online and offline performance. Data scientists who work on ML need to know that the result matches their trained model in a real-time setting. For that, they need a streamlined CI/CD (continuous integration/continuous delivery) process with a constant feedback loop between dev and ops, so engineers can improve the model and rapidly deploy. Such error resilience is fundamental in a workplace environment where new engineers join all the time. A managed approach, achieved through MLOps, stops software from getting lost and keeps your team on the same page.

5. MLOps prevents fatigue.

A manual ML process turns your energetic, promising crew of data scientists into frustrated and underutilized engineers who feel they're spinning in an endless Sisyphean circle. MLOps does away with data drift, giving operations teams and data scientists the creativity and motivation to continue their work. You’re more likely to get promising insights and actionable results.

6. MLOps reduces bias.

MLOps can guard against certain biases in algorithms that, if undetected and uncorrected, can harm under-represented people in fields such as health care, criminal justice, and hiring. Overlooked biases in marketed software can also dent a company’s reputation and expose it to legal scrutiny.

7. MLOps leads to actionable business value.
Close the training-to-operations loop faster, and you turn ML insights into actionable results. Each stage of the process seamlessly connects with and flows into the next, workers from different teams collaborate, and bottlenecks disappear, leading to productive outcomes.

8. MLOps helps you with regulatory compliance.

The ML process is held accountable to a slew of government compliance and ethical obligations around data security, machine ethics, and data governance. MLOps frees your data team to focus on what they do best: creating and designing software, while allowing your operations team to concentrate on the ins and outs of management and regulations.

9. MLOps facilitates team communication.

Each team has its particular talents. Without MLOps, your operations teams would be unable to communicate with your data engineers, data scientists, and software engineers, and vice versa, resulting in wasted human potential. There would be wasted software potential, too, with promising software designs and solutions held up at deployment or some earlier stage, rendering them ineffective.

Bottom Line

Why do we need MLOps? Here’s what the engineers at Google Cloud say: “With the long history of production ML services at Google, we've learned that there can be many pitfalls in operating ML-based systems in production.” A platform for MLOps helps you shorten the system development life cycle and ensure that high-quality software is continuously developed, delivered, and maintained in production. Done well and consistently followed through, MLOps can be a game-changer for your company: it eliminates waste, automates the ML cycle, and produces richer, more consistent insights.

Let’s Connect!

Leah Zitter, Ph.D., has a master’s in philosophy, epistemology, and logic and a Ph.D. in research psychology. There is so much more to discuss, so connect with us and share your Google Cloud story. You might get featured in the next installment!
Get in touch with Content Manager Sabina Bhasin at sabina.bhasin@c2cglobal.com if you’re interested. Rather chat with your peers? Join our C2C Connect chat rooms!
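The repeatable, automated workflow described in points 2 through 4 above can be sketched in a few lines. Below is a toy pipeline runner (all step names, the fabricated accuracy value, and the versioned artifact store are hypothetical, not any real MLOps platform's API) showing documented, repeatable steps with a quality gate before deployment:

```python
# Toy MLOps-style pipeline: every step's output is versioned, and a
# quality gate blocks deployment, mimicking the CI/CD loop described above.
artifacts = {}  # hypothetical versioned artifact store: name -> list of versions

def record(name, value):
    artifacts.setdefault(name, []).append(value)  # keep every version
    return value

def prepare_data():
    return record("dataset", {"rows": 1000, "version": len(artifacts.get("dataset", [])) + 1})

def train(dataset):
    # Stand-in for real training; the accuracy is fabricated for the sketch.
    return record("model", {"trained_on": dataset["version"], "accuracy": 0.91})

def evaluate(model, threshold=0.9):
    return model["accuracy"] >= threshold  # the quality gate

def run_pipeline():
    dataset = prepare_data()
    model = train(dataset)
    if not evaluate(model):
        return "blocked: model below quality bar"
    return record("deployment", {"model_version": len(artifacts["model"])})

run_pipeline()
run_pipeline()  # re-running is cheap and repeatable; every version is kept
print(len(artifacts["model"]))  # 2
```

Because each step is a reusable function and every artifact version is retained, the pipeline can be rerun by anyone on the team without code being lost or deployed unreviewed.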

Categories: AI and Machine Learning, Cloud Operations

7 Reasons Why AIOps Is the Future of Developer Productivity

This article was written by Leah Zitter, Ph.D.

Problem: You've got massive data flowing in from multiple sources—Google Cloud, your private cloud, Azure, AWS, or others—flooding you with noise. You simply don't have the time or ability to identify which alert is essential and which to overlook. And that's a pity, because you may inadvertently miss something urgent, like an unusual spike in traffic, which could indicate a possible cybersecurity concern. That's where AIOps—short for artificial intelligence for IT operations—comes in. These algorithmic operations help you combine ML with big data to troubleshoot and automate IT operations processes.

AIOps Accurately Identifies the Root Cause

AIOps accurately identifies root cause in at least three areas:

Correlation, or the co-occurrence of events, where AIOps helps you find the common root of several IT processes that are short-circuiting at the same time.
Topology, or the actual physical connections between items, where AIOps helps you identify where things started going wrong in one or more items.
Clustered causes, where, if you’ve got, say, a sequence of events or a cluster of similar events, AIOps helps you identify which of the causal events caused the breakdown.

These three points help you identify where and why things go wrong and reduce mean time to detection (MTTD): AIOps simply enables you to detect the problem faster than a manual configuration of the IT system would.

AIOps Creates a Single Pane of Glass for Alert Data

You've got data coming in across vectors, such as from Microsoft Azure, your native systems, VPN gateways, Amazon Web Services, and so forth. AIOps helps you cluster this mass of data on one platform. This makes things easier for systems specialists, who simply need to visit alerts on one pane of glass to identify and resolve problems and automate solutions.
It helps you see data across environments or, in other words, enables you to put the entire hybrid cloud in one place.

AIOps Offers Intelligent Incident Management

So you’ve got all this data coming in. What do you do with it? AIOps helps you assign the stream of incoming alerts into relevant groups to resolve the different issues. Example: AIOps assigns events that show similar patterns to the silo for IT operations management, events that show incident factors to IT service management, and so forth. Each cluster of events is then assigned to a relevant agent. This improves mean time to recovery (MTTR), helping the right person get the right work done faster, and automates your business to become better, faster, and more efficient.

AIOps: Anomaly and Threat Detection

AIOps helps us identify associations and determine whether something’s wrong in the first place. That’s anomaly detection. In other words, AIOps alerts us to sudden changes in behavior or sudden changes in data. It looks at values over time and determines if some sort of abnormality is happening. Example: AIOps tells us if one of our systems is getting an unusual amount of traffic, indicating a possible cyber breach.

AIOps Forecasts Events

AIOps helps with predictive analytics, using data to forecast a behavior before it happens. Example: Unlike in traditional technology, where you hit your storage limit without warning, AIOps warns you that (for example), “You’re 14 days from hitting 90% capacity.” Forewarned is forearmed.

AIOps Resolves Issues Automatically

Now that AIOps has helped you identify the problem, you can fix the issue with some sort of scripting or external orchestration (also called runbook automation) to prevent the issue from recurring. In other words, you automate the solution so that processes run faster and more accurately, without the need to manually reconfigure them each time something goes askew.
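At their core, the anomaly-detection and forecasting ideas above boil down to simple statistics. A toy sketch (hypothetical traffic and capacity numbers; real AIOps platforms use far richer models than a z-score and a straight line):

```python
import statistics

# Hypothetical requests-per-minute history, with one suspicious spike.
traffic = [102, 98, 105, 99, 101, 103, 97, 100, 340]

def zscore_anomalies(values, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the
    mean of the preceding history: a crude 'sudden change in data'."""
    baseline = values[:-1]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [v for v in values if abs(v - mean) / stdev > threshold]

def days_until_capacity(used_pct_history, limit=90.0):
    """Linear forecast: at the current daily growth rate, how many days
    until usage hits `limit` percent? (The '14 days to 90%' warning.)"""
    daily_growth = (used_pct_history[-1] - used_pct_history[0]) / (len(used_pct_history) - 1)
    return (limit - used_pct_history[-1]) / daily_growth

print(zscore_anomalies(traffic))                          # [340]
print(round(days_until_capacity([70, 71, 72, 73, 74])))   # 16
```

The spike gets flagged because it sits far outside the baseline's spread, and the capacity warning is just an extrapolation of the growth trend: the same shape of reasoning AIOps automates at scale.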
AIOps: Incident Management

AIOps logs a record of the troubleshooting incident, such as “The system could remediate this problem” or “We tried the x and y scripts and finally used z.” Such records help the IT team fix similar disruptions more cheaply, faster, and more efficiently. If a solution falls through, all you need to do is retrace your steps to explore alternatives.

AIOps for the Future World

AIOps assigns incoming alerts to relevant IT containers, so the right agent can identify the problem, automatically remediate the issue, predict and prevent other adverse events from occurring, and log a record of the event for incident management. AIOps integrates information from multiple sources on one single pane of glass so a system administrator can read and interpret that information more easily. Put otherwise, AIOps helps you do everything from discovery to resolution and reduces the time it takes to troubleshoot events, so your business can quickly spring back to operations. In the World of the Future (that’s actually the world of the present), AIOps is the last word in your ability to adjust to unexpected and constantly changing IT environments. With the recent shift to remote work, AIOps helps us understand, troubleshoot, and automate IT processes across enterprises for competitive business value. It's this digitization that's the make or break of our companies.

Extra Credit

Google Cloud introduces pipelines for those beyond ML prototyping
Setting up an MLOps environment on Google Cloud
Advanced API Ops: Bringing the power of AI and ML to API operations
An introduction to MLOps on Google Cloud

Categories: AI and Machine Learning, Cloud Operations

C2C Deep Dive: Demystifying Anthos with Google Cloud's Director of Outbound Product Management

Originally published on December 4, 2020.

In this C2C Deep Dive, product expert Richard Seroter aimed to build the foundations of understanding with live Q&A. Here’s what you need to know:

What is Anthos?

In its simplest form, Anthos is a managed platform that extends Google Cloud services and engineering practices to your environments so you can modernize apps faster and establish operational consistency across platforms.

Why GCP and Anthos for app modernization?

Responding to an industry shift and need, Google Anthos “allows you to bring your computing closer to your data,” Seroter said. So if data centers are “centers of data,” it's helpful to have access to that data in an open, straightforward, portable way, and to be able to do that at scale and consistently. Hear Seroter explain how this can help you consolidate your workloads.

First-generation vs. second-generation cloud-native companies: What have we learned?

The first generation was all about infrastructure automation and a continuous delivery (CD) mindset, at a time when there wasn’t much research into how to make it happen. Some challenges included configuration management, dealing with multiple platforms, and dealing with security. Now, as Richard Seroter explains more in this clip, the second generation is taking what has been learned and building upon it for sustainable scaling, with a focus on applications.

Is unified hub management possible through Anthos for the new generation?

Yep. Anthos offers a single-management experience, so you can manage every Anthos cluster in one place and see what they’re doing, but you can push policy back to them, too. You can apply configurations and more to simplify the billing and management experience.

Serverless anywhere? You bet. Use Cloud Run for Anthos.

Building upon the first generation of platform as a service (PaaS), GCP brings Cloud Run for Anthos as a solution for teams that need more flexibility and want to build on a modern stack.
Besides being Richard Seroter’s favorite, it balances the three vital paradigms existing today: PaaS, infrastructure as a service (IaaS), and containers as a service (CaaS). Watch the clip to hear Seroter explain the how and the why.

What about a GitOps workflow and automation—is scaling possible?

Yes. With Anthos Config Management (ACM), policy and configuration are possible at scale. You can manage all cloud infrastructure, not just Kubernetes apps and clusters, and even run end-to-end audits and peer review. Watch to learn how this works.

Question from the community: Are Hybrid AI and BigQuery capabilities available for Anthos on-prem?

With Hybrid AI for Anthos, Google offers AI/ML training and inferencing capabilities with a single click. Google Anthos also allows for custom AI model training and MLOps lifecycle management using virtually any deep-learning framework.

Prefer to watch the whole C2C Deep Dive on Application Development with Anthos?

Categories: AI and Machine Learning, Application Development, Infrastructure, Getting Started with Google Cloud, DevOps and SRE, Hybrid and Multicloud, Cloud Operations, Serverless, Session Recording