As organizations continue to adopt new digital practices and shift to more cloud-native strategies, digital security becomes increasingly important. Cloud migration can help businesses achieve maximum productivity, but the bigger digital landscape it provides also means more opportunities for cyber attacks.

What is Cloud Security?

Cloud security is made up of a wide variety of procedures, technologies, policies, services, and controls designed to protect cloud-based applications and systems from various kinds of attacks. There are three main categories of cloud service to secure:

- Software-as-a-Service (SaaS): any on-demand application software that is ready to use and cloud-hosted.
- Infrastructure-as-a-Service (IaaS): back-end infrastructure that provides on-demand access to both physical and virtual servers for managing workloads and running cloud-based applications.
- Platform-as-a-Service (PaaS): on-demand access to a ready-to-use, cloud-hosted platform, primarily used for developing, running, and maintaining applications.

The Shared Responsibility Model

Some organizations use a shared responsibility model for their cloud security. This model delineates security responsibilities between the customer and the provider to ensure more robust security and safer processes. The shared responsibility model establishes the responsibilities and accountability that:

- Are always the provider's
- Are always the customer's
- Depend on the service model

Cloud Security Challenges

Broader Attack Surface

Complex cloud environments with dynamic workloads require tools that work seamlessly across any applicable providers and at scale. Because of the cloud's ever-evolving landscape, risks of malware, zero-day exploits, account takeover, and other attacks are a constant concern.

Privilege Management

Granting user privileges to those outside an organization, or to those who have not been properly trained, can lead to malicious attacks, data deletion, and other security risks.
This makes it more important than ever to keep privileges organized and grant them only to those in an organization who need them.

Compliance and Legal

While cloud providers are backed by accreditation programs, it is still the responsibility of customers to ensure that their processes comply with government regulations. Because of the dynamic landscape that comes with cloud computing, this can become complicated.

Security That Evolves

Zero Trust

First introduced in 2010, Zero Trust is a principle according to which a system does not automatically trust anyone or anything outside an organization's network; it requires verification and inspection instead. Users who have access are confined to the tools and applications they require. Furthermore, Zero Trust requires developers to ensure that any web-facing applications are properly secured.

Security Service Edge (SSE)

Zero Trust is an important part of SSE, which provides secure access to the internet and to an organization's private applications, as well as SaaS and cloud applications. This allows for more streamlined and robust security while also making costs more predictable and reducing operational overhead.

The Pillars of Cloud Security

To ensure that there are no gaps in security between cloud-based applications, and that security solutions can scale in a dynamic cloud environment, there are several best practices organizations should follow.

Identity and Access Management (IAM)

IAM helps to regulate access to tools and applications in cloud environments. This ensures that no users within the cloud have access where they shouldn't.

Data Protection and Encryption

Encryption should be used for all transport layers, and all file shares should be secured. Good data storage practices should also be followed, such as terminating orphan resources and detecting and remediating misconfigured buckets.
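The kind of misconfigured-bucket check mentioned above can be sketched in a few lines. This is a minimal illustration only: it assumes bucket metadata has already been exported from a provider's API into plain dicts, and the field names ("public_access", "encrypted") are hypothetical, not any real cloud API.

```python
# Hypothetical bucket metadata, as if exported from a cloud provider's API.
buckets = [
    {"name": "prod-invoices", "public_access": False, "encrypted": True},
    {"name": "marketing-assets", "public_access": True, "encrypted": True},
    {"name": "legacy-backups", "public_access": True, "encrypted": False},
]

def find_misconfigured(buckets):
    """Return names of buckets that are publicly readable or unencrypted."""
    return [
        b["name"]
        for b in buckets
        if b["public_access"] or not b["encrypted"]
    ]

print(find_misconfigured(buckets))  # -> ['marketing-assets', 'legacy-backups']
```

A real scanner would pull this metadata live from the provider and run on a schedule, but the core logic is the same: enumerate resources and flag those violating policy.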
Detection Controls

The use of asset and configuration management systems and vulnerability scanners is beneficial for cloud security, offering a better view of the landscape as well as any threats looming on the horizon. Anomaly detection algorithms also use AI to quickly detect unknown threats and determine the best course of action.

Incident Response

Incident response should be automated as much as possible. By automating responses to the most common threats and security breaches, IT teams can spend their time on more complex tasks that require human solutions.

Learn more about cloud security from our community members today!
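The statistical idea behind many simple anomaly detectors, such as those described under Detection Controls, can be illustrated with a toy z-score sketch. All numbers here are invented, and production detection controls use far richer models; this only shows the core concept of flagging observations far from recent history.

```python
import statistics

def find_anomalies(samples, threshold=2.5):
    """Return (index, value) pairs whose z-score exceeds the threshold."""
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []  # a perfectly flat series has no outliers
    return [
        (i, x) for i, x in enumerate(samples)
        if abs(x - mean) / stdev > threshold
    ]

# Hypothetical requests-per-minute metric with one obvious spike.
traffic = [120, 118, 125, 122, 119, 121, 950, 123, 120]
print(find_anomalies(traffic))  # -> [(6, 950)]
```

An automated incident response would hook such a detector to an action, e.g. paging the on-call engineer or throttling the offending client, rather than just printing the result.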
As technology-based companies rapidly grow and evolve, applications and software can often be left in the dust due to limitations on what can or cannot be optimized. These outdated tools can be a hindrance and make the customer journey more difficult. That's why it's imperative that businesses employ solutions that are scalable and customizable while also ensuring that applications can be updated in a streamlined process to meet customer demands.

What is Cloud-Native Architecture?

Cloud-native architecture allows businesses to develop applications and software that are easy to update and maintain in nearly any dynamic environment. By employing the four cornerstones of cloud-native architecture, organizations can develop and support applications and software more efficiently.

While cloud-native and cloud-based applications share similar characteristics, there is one factor that separates them. Cloud-based applications take advantage of the cloud and can function within its infrastructure, but will run into limitations when interacting with some cloud features. Cloud-native applications, on the other hand, are fully optimized for the cloud and are adaptable within its dynamic environment.

C2C partner Aiven provides a platform that cloud-native organizations can use to manage their spend and host their resources with optimal efficiency at scale. According to Mike von Speyr, Aiven Director of Partner Sales, "With public spend on cloud infrastructure expected to rise over 20% this year, it is important to remove the complexity from your cloud operations if you want to be truly cloud-native - and that's where Aiven comes in. Our data platform can operate all the major open-source data tools, including those that aren't native to Google Cloud such as Apache Kafka, Apache Flink, OpenSearch, and Cassandra, all within a single control plane, with the same reliability guarantees.
We also offer a 'bring your own cloud' service that allows you to have the power of the Aiven Platform directly in your own accounts, and this has proven to yield reductions in TCO of over 30%."

The Four Pillars of Cloud-Native Architecture

For a cloud-native strategy to work optimally, it should include four main components. Each of these pillars plays a part in ensuring that updates and development run smoothly with little error. This increases efficiency for organizations while ensuring continuous quality.

DevOps

Software projects usually consist of development teams, which make changes and updates based on user feedback, and operations teams, which resist change in an attempt to keep the software running smoothly and securely. This often leads to internal friction between teams and delays in product launches. DevOps is a process that focuses on optimizing the delivery and development of software by emphasizing communication between:

- Product management
- Software development
- Operations professionals

Communication between these teams is made easier by automating and monitoring critical processes in the development cycle, including:

- Software integration
- Testing
- Deployment
- Infrastructure changes

By implementing DevOps in software development, organizations tend to see increases in code quality, testing accuracy, and predictability. Furthermore, an emphasis on automation ensures a decrease in human error from initial development through future updates.

Microservices

Microservices are small, autonomous, and independently deployable services that run in separate processes yet still continuously communicate with each other. Microservices typically interact over HTTP-based RESTful APIs, an architectural style used for creating, reading, updating, and deleting data. Because they work separately from each other, microservices can be tailored to any given load demands and can be scaled independently.
This makes it easier for organizations to optimize specific components or areas where needed without having to worry about disrupting other systems in the process.

Containers

When looking for a reliable and flexible way to move software between different computing environments (from PCs to the cloud, for example), containers are the best bet. Containers allow software to migrate seamlessly between isolated environments, making it easy to move data anywhere or reproduce specific conditions throughout the development process.

Continuous Delivery

Traditional software development typically follows the waterfall approach, which sees updates released over long periods of time. Continuous delivery streamlines this process by ensuring updates can be delivered as rapidly as needed, sometimes several times within one day. Organizations see increases in development efficiency when using continuous delivery, as it is highly automated and allows for experimentation without the typical associated risks. Updates roll out faster, and human error is less of a concern.

Dig deeper into cloud-native infrastructure by visiting our online community, or contact us today to learn about becoming a C2C partner!
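The HTTP-based RESTful interaction described under Microservices above can be sketched with only the Python standard library. The tiny "inventory" service, its `/items` route, and its data are invented for illustration; a production microservice would use a real framework and run in its own process or container.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# In-memory store for a hypothetical inventory microservice.
ITEMS = {"widget": 3}

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # RESTful read: GET /items returns the whole store as JSON.
        if self.path == "/items":
            body = json.dumps(ITEMS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # silence per-request logging for this demo

# Bind to port 0 so the OS picks a free port, and serve on a background thread.
server = HTTPServer(("127.0.0.1", 0), InventoryHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# A second "service" (here, just the main thread) consumes the API over HTTP.
with urlopen(f"http://127.0.0.1:{port}/items") as resp:
    data = json.loads(resp.read())
server.shutdown()
print(data)
```

Because each such service owns its own process and talks only over HTTP, it can be scaled, replaced, or containerized independently of its consumers, which is the property the article highlights.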
Google Cloud, CB4, Resonai, Panorays | June 14th, 2023 | Tel Aviv

At this 2Gather event, a panel discussion led by Carine Lev Lahav from Google Cloud focused on both the solutions and the challenges of leveraging data to enhance growth for cloud enterprises. From data analytics to security, the event featured an interactive question-and-answer session covering a range of topics involved in innovating the cloud journey. Demi Ben-Ari from Panorays, Natalie Rozenboim from Resonai, and Dana Rosenfeld from CB4 provided insights on the specific strategies they have used to enhance the use of the cloud within their own organizations.

Palo Alto Networks also shared the benefits of the Prisma Cloud platform. Prisma Cloud secures applications from code to cloud, allowing DevOps teams to collaborate in a secure environment. A major advantage of Prisma Cloud is that it has decreased the training issues that arise from relying on security tools from external vendors. The platform also helps to avoid friction between security and development teams with code-to-cloud protection. Prisma Cloud's approach is rooted in moving from point security tools to transparent visibility and protection in real time.
Palo Alto Networks, AMD, HighRadius, Automation Anywhere, Rackspace, Michaels Stores, Lytics, Ancestry, Workspot | Cloud Adoption Summit | Sunnyvale, CA | April 27th, 2023

For organizations far along in their cloud journey or just at the beginning, this full-day summit offered insightful perspectives on all stages of interaction with the cloud. The Cloud Adoption Summit in Sunnyvale featured a variety of Google Cloud customers who shared their stories about securing cloud adoption, cloud cost optimization, and how to leverage AI when migrating to the cloud. The program at this 2Gather also featured an interactive partner panel led by Jim Anderson, Vice President of North America Partners at Google; focused breakout sessions featuring shared experiences and challenges from key leaders; and a panel led by Olga Lykova (@OlgaLykovaMBA), Head of Go To Market at Workspot, featuring speakers from Intel, Salesforce, Google, and LinkedIn.

Dwyane Mann, CISO & VP of AI, Fraud and Data at Michaels Stores, shared a cloud journey story focused on the different options an organization is presented with when migrating to the cloud. The session discussed the distinction between creating a new infrastructure and a conventional cloud migration, including differences in cost structure, design frameworks, and team dynamics. Key takeaways and lessons shared with the audience include:

- Conduct MVPs, the method whereby only core features that solve specific issues are deployed.
- Identify core features and cost management strategies with leadership early in the process to reduce friction points in the future.
- Effectively label each stage within a cloud migration and its functionality.
- Involve more teams in the cost project to increase transparency.
Cloud computing has provided organizations in the tech industry with many benefits. Whether by helping to create a remote work environment or saving companies substantial costs, the cloud has transformed the ways businesses operate in the modern world. One of the major benefits of cloud computing technology is how it contributes to sustainability. The cloud reduces onsite activity associated with hardware and computing power consumption. Companies on the cloud do not need to maintain physical hardware or worry about disposing of or recycling it. Cloud computing also eliminates the need to house and power an infrastructure. Not investing in physical IT equipment and instead consuming it as a service has environmental benefits because it helps to reduce the carbon footprint of major corporations. Below are some of the ways the cloud has helped contribute to sustainability efforts.

Reduces Energy Consumption

The National Renewable Energy Laboratory confirmed that data centers account for 1.8% of overall energy consumption in the US each year, because onsite servers need to be powered by large amounts of electricity. Data centers also require significant upkeep and maintenance: the electricity not only powers the servers, it also runs the cooling fans that keep them from overheating. Research funded by Google and conducted at Berkeley Lab found that cloud computing software can provide up to an 87% decrease in electricity usage. The energy saved is so great that it could power Los Angeles for an entire year, and it can help businesses save 60-85% in energy costs.

Decreases Greenhouse Gas Emissions

Sustainable computing decreases the amount of greenhouse gases (GHGs) emitted from data centers. GHGs are created throughout the life cycle of data center equipment.
This includes producing materials for the equipment, assembling it, using it within the data center, and then disposing of it once the life cycle is complete. A survey conducted by Accenture revealed that cloud computing has a large impact on carbon emissions. By using green computing approaches, large companies can decrease their carbon footprint by up to 30%, and smaller organizations can decrease theirs by up to 90%. A CDP report also concluded that offsite services can reduce annual carbon emissions by 85.7 million metric tons.

Dematerialization

Dematerialization refers to the replacement of physical products that create greenhouse gas emissions with a virtual equivalent. Cloud services reduce energy consumption as well as e-waste, and encourage organizations to conduct their business virtually. Examples include video conferencing and sharing documents rather than printing multiple copies to distribute to a large number of employees. Using green computing architectures results in fewer physical machines and less hardware, which means a lower environmental impact. The cloud allows businesses to focus on their daily tasks without having to worry about IT and maintaining onsite infrastructure. By reducing physical products, such as equipment and hardware, cloud computing reduces the amount of e-waste generated when disposing of these products. As mentioned, it also helps organizations to go paperless with cloud storage options, such as Google Drive.

Utilization Rates

On-prem companies use their own private data centers, which means equipment is purchased and then set up to handle high usage spikes, resulting in lower utilization rates. Even when the hardware isn't in use, it still consumes energy, which negatively impacts the environment. The cloud decreases the amount of machine use an organization requires, translating to higher utilization.
Because cloud providers run shared, highly optimized infrastructure, cloud servers tend to be 2 to 4 times more efficient than on-premises equipment.

Hardware Speed

Data center hardware is used for long periods of time before an upgrade or replacement, due to high costs. Because of their higher utilization rates, public cloud servers have a shorter life cycle, which creates a quicker refresh time. It is more cost-efficient for public cloud servers to be upgraded on a regular basis, because new technology has higher efficiency rates. The more efficient hardware is, the less energy it will use in the long run. In short, cloud computing services operate with higher resource (or energy) efficiency than traditional data centers. Sustainable computing is rooted in the concept of sharing services and hence maximizing the effectiveness of resources.

Creates a Shift to Renewable Energy Sources

Cloud data centers have also shifted to renewable sources of energy to power their operations, reducing their carbon footprint. By using solar, hydropower, and wind, companies are attempting to generate more eco-friendly electricity. Cloud providers are currently working towards creating more renewable energy options.

Helps with Remote Working

Employees of an organization no longer need to be in an office; they can work from anywhere, at any time. The cloud allows employees to share and distribute documentation through its servers. Working collaboratively is now possible because of cloud computing, as employees can share company information in a safe and secure environment. Secure data storage and management allows for a smooth work environment for both employees and the business. Workers no longer need to go on long commutes, decreasing the amount of pollution produced.

Rackspace Technology and Green Computing

Rackspace Technology is a cloud computing services pioneer with multicloud solutions across apps, data, and security.
The platform recognizes that it is time for enterprises to prioritize sustainability, and that IT leaders must take the lead in driving sustainability efforts. Rackspace's Chief Architect, Nirmal Ranganathan, states that "Cloud computing, green computing architectures, design principles, and understanding the data around energy consumption are all critical components to modernize an IT industrial complex skills, infrastructure, and applications. For a more sustainable IT future, the key to sustainability efforts is in the data. Differing perspectives about business pledges versus their actual performance is ongoing and constant. Often, the former is more ambitious than the latter. Data will play a vital role in the drive toward net zero to verify claims made by business partners and to accurately measure a business' own carbon footprint."

Rackspace focuses on empowering companies to grow using technology. Through the delivery of cloud services, the building of efficient serverless architectures, and design principles focused on driving long-term value creation, Rackspace Technology is contributing to an overall reduction in energy consumption and building sustainable IT. Migrating and modernizing companies to cloud environments helps to reduce the number of over-provisioned, energy-hungry data centers across an enterprise.
FinOps—or cloud financial operations—is the practice of embedding financial accountability, management, and cost optimization within engineering teams. FinOps is growing in importance and relevance among financial and engineering teams alike. A recent Forrester report predicts that over 90% of organizations are overspending on cloud costs due to several factors, including lack of skills and over-provisioned resources. CIOs, business leaders, and the media are all talking about FinOps in 2023. Interest in FinOps is growing at an unprecedented rate, especially given recent economic fluctuations.

C2C had the opportunity to host Sasha Kipervarg and Ken Cheney to bring their perspectives on FinOps to the C2C community. Sasha is the co-founder and CEO of Ternary, the world's first native FinOps cloud cost optimization tool built for Google Cloud on Google Cloud. Ken Cheney is a business leader and advisor to SaaS and cloud vendors, large enterprises, and the government. During this informal chat, you will learn about the importance of:

- Accountability and enablement of cloud optimization
- Implementing this optimization via policies and useful resources (such as Ternary) that can help you:
  - tag resources (every workload, every cluster)
  - track untagged resources
  - track the biggest spenders and stay on top of fluctuations in these costs
- Reporting (the earlier the better) for aggregating spend and arriving at a total cost for various teams
- Setting up reporting to make planning and forecasting easier

Watch the full recording here:
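The tagging discipline discussed in the chat can be sketched in plain Python. This is a toy illustration under stated assumptions: billing data is assumed to be already exported as simple records, and the field names ("monthly_cost", "tags") are hypothetical rather than any provider's actual schema.

```python
# Hypothetical billing export records for a handful of resources.
resources = [
    {"name": "ml-training-cluster", "monthly_cost": 4200.0, "tags": {"team": "research"}},
    {"name": "staging-db", "monthly_cost": 310.0, "tags": {}},
    {"name": "prod-web", "monthly_cost": 1900.0, "tags": {"team": "platform"}},
    {"name": "old-snapshot-store", "monthly_cost": 75.0, "tags": {}},
]

def untagged(resources):
    """Resources with no tags can't be attributed to any team's budget."""
    return [r["name"] for r in resources if not r["tags"]]

def top_spenders(resources, n=2):
    """The n most expensive resources, largest first."""
    ranked = sorted(resources, key=lambda r: r["monthly_cost"], reverse=True)
    return [(r["name"], r["monthly_cost"]) for r in ranked[:n]]

print(untagged(resources))      # resources to chase down and tag
print(top_spenders(resources))  # where cost review effort pays off most
```

Tools like Ternary automate this kind of attribution at scale across a whole Google Cloud estate; the point of the sketch is only that untagged spend is invisible spend.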
Please introduce yourself:

My name is Clair, and I'm a Senior Program Manager at Vimeo. I work with the C-suite, multi-country product leads, and PMO managers to produce meaningful organizational change. I also deliver business-critical initiatives at an enterprise scale, and my expertise is in digital transformation, process redesign, and revenue optimization. I currently live in Manhattan with my husband, and I'm originally from Korea.

You have a diverse background, from design to consulting. How did you find yourself in tech?

When my father got stage three cancer, I was 17 and education was a luxury. It took away my childhood dream of becoming a lawyer, and I had to work two jobs to be the breadwinner of my home. When I was 19, I submitted a 1200-word article to the Korea Times' Thoughts for Times section. The Korea IT Times Editor-in-Chief took notice of my work and scouted me as a reporter while I was taking online university classes. I learned from the world's tech leaders and served as a media partner to over 200 IT companies to promote their products and services online. I promised myself that once my baby sister graduated college, I'd pursue a Master's program with my own savings, and it took me 10 years to reach that goal. Of all the schools I applied to, Parsons School of Design offered me a merit-based scholarship, and I chose to study strategic design and management, allowing me to dive deep into different methods of design thinking and managing creative work. This time in my life came with a few different challenges, as I applied to 320 companies and revised my resume 221 times. However, each time I received a rejection letter, I'd dissect the job description and dedicate myself to self-improvement. New knowledge and certifications led to project opportunities with Nike, Delonghi, and Toyota's design and engineering teams.
In 2018, by the time I had gained six certifications, including The Wharton School's Financial and Business Modeling, PMP, CSM, and Google AdWords, PwC gave me an opportunity. By the time I completed the Lean Six Sigma course, I had been led to the tech industry, and to Vimeo.

How has Google Cloud made an impact on you?

Google Cloud has always been my amplifier. In 2022 at Vimeo, the principal engineer of the Hosting Ops team designed a new container solution with our Core Services and Video Platform teams to cost-effectively store large video files at Vimeo. We selected Google to be our partner for this endeavor when we were using multiple regional storage solutions. Through the Google Cloud STS service, we migrated large legacy video files into a new bucket safely in less than three months. The principal engineers at Google partnered with our Video Platform, Core Services, and Hosting Ops teams to assess risks and proactively manage them. The success of this complex project, in partnership with the best teams from both parties, resulted in substantial cost savings. A shout-out to Dave Stoner's team at Google!

Additionally, in 2005, I was a reporter at the Korea IT Times. We had a competitive advantage in being the nation's first English-language, IT-specialized online/offline journal. However, the business couldn't sustain itself through new technological advances. When my paycheck fell behind, I suggested to the CEO that we redesign our website to meet Google News requirements. At that time, we had to rebuild the entire website for our English content to syndicate to Google News. We did this for the first time in Korea, and what this meant was a 200% increase in sponsorship and a sixfold increase in revenue. When our media partners exhibited at international fairs, they could share the article link in a follow-up email rather than distributing paper kits. Google has always been a powerful tool in my life and a driving force in helping me solve critical issues.
What does being a leader mean to you?

I think sometimes I struggle to define that myself. It used to be about "Am I doing enough for others? Am I dedicating enough time to them?" I thought those were the qualities of a leader. When I think about the term today, I believe it's rooted in company growth. The challenge here is to be a force of nature as a leader who can empower others to reach their own destiny while also balancing the needs of a team. Rather than always being the person who relies on facts, guidelines, and analysis, I'm learning to embrace my natural feminine identity in the process of striving for effective communication.

When faced with a challenge or obstacle in life, how do you handle it?

To be honest, being in the United States has really helped. Failures and obstacles are viewed as a part of the journey rather than a form of shame. However, in the culture I grew up in, mistakes were viewed harshly. As an immigrant from a different country, I struggle with questioning myself and my expectations. When this happens, I turn to music or running. A recent hobby of mine has been writing TV show scripts, and I realized that writing helps me to look at the bad moments of a day from a bird's-eye view. It's very therapeutic and helps me to understand that whatever is happening is just a part of season one.

If you could go back and give your younger self advice, what would that be?

I would tell myself that when there is a will, there's a path. When I was younger, I always had a will, but keeping faith was a challenge. Life felt giant, and every day felt like I was never excellent enough to become successful, when really, I didn't have a definition of success. I'd tell myself to create a vision, get credentials, and never stop learning. In my case, every time I was about to give up, someone always found me and led me closer to my ambition, having taken notice of my track record of dedication to continuous improvement.
I believe you shouldn't stop being an eternal student. Continuously seek wisdom through knowledge, and have faith that the perfect reward awaits you.

How would you like to see organizations celebrate female talent?

I was recently very inspired by an event titled "I Am Remarkable." Culturally, we grow up hearing that modesty is the best virtue. Especially when you are a woman, the better job security comes from nailing the behind-the-scenes supporter role. It was an emotional event for me to witness, because these amazing women celebrated their wins from small to big, and were being vulnerable while also empowering each other. It made me want to create more time and space to participate in these events to nurture my own confidence so that I can be more comfortable in my own skin. The remarkable women were building a strong community by recognizing greatness in others, and I'd love to belong to more communities like this to inspire meaningful changes in the world.

What is your favorite aspect of working with other women?

Women together are like "stars aligned" in my perception. I once belonged to a certain type of culture where men with a higher title would serve the role of the "hero" of a team. We have great female leaders who joined Vimeo from Google and Amazon for our key product areas. Our presence helped mixed groups at Vimeo to shine brighter together. Sometimes I imagine us looking like a Sagittarius together, other days like an Aquarius: when the stars are aligned, we constantly ask each other what can be done by the work function or at the leadership level to remove impediments and overcome any limitations for the simple mission of enabling the power of video. We bring balanced perceptions, empowerment, and a strong will to accomplish our mission together.

Who are your role models?

Currently, the CEO of Vimeo, Anjali Sud, and CFO Gillian Munson have deeply inspired me.
I've never seen such strong leaders, who empower us with smart management while also being fiercely vulnerable with us and displaying humility. This is the first time in my career that I am working for or with a female C-suite. Recently, when Anjali said in our Town Hall that times like this define who we truly are and how together we can become stronger, I dearly missed my Japanese grandmother. She was devoted and positive throughout all crises, including post-war family loss and rebuilding. She has been a true role model in my heart.

For young women going into the tech space, what advice would you give them?

First, you need to find and understand your interest, then connect it to tech areas where you could make an impact. I'd recommend researching what credentials you have to earn to get into that market, and I'd also narrow the search down to areas that you'd be interested in learning more about. There are many online courses available that will give you a glimpse of university professors teaching different topics, which will help you strengthen that interest. Once you've narrowed it down, take a look at the job descriptions in that field, because they are like a cheat sheet for where you want to go! Dissect the requirements, see which ones you can tackle currently, and map out the ones you can achieve in the future. There are small, tactical hints you can catch in job descriptions that are quite actionable now and will make you feel like you're working towards your end goal. Dream on!

Want to read similar articles? Check out these other interviews with women in Cloud: Women in Cloud: Meet Shobana Shankar
Cloud cost optimization is the process of minimizing cloud spend without impacting the performance or scalability of workloads in the cloud. Cloud cost best practices are rooted in finding ways to eliminate costs by identifying unwanted resources and scaling services accurately. Many external factors, including inflation and a changing labor market, are forcing businesses to restructure their financial priorities. Though many models of cloud computing offer flexible payment structures and pay-as-you-go methods, cloud cost optimization strategies allow businesses to tighten their grip on controlling resources. Additionally, cloud cost optimization will also highlight whether the resources being used are in alignment with the infrastructure and business goals of an organization. The following are strategies that will help run applications in the cloud at lower costs.

1. Eliminate Resources

One of the most simple yet effective cloud cost-saving strategies is to eliminate resources that are not fully benefiting a business. For example, users may allocate a service to a temporary workload. Once the project is complete, the service may not be eliminated instantly by an administrator, resulting in unwanted costs for the organization. A solution is to examine the cloud infrastructure and look for servers that are no longer needed within the environment because they aren't serving business needs. Cloud cost optimization strategies are not just about eliminating spending but also about ensuring that costs are in alignment with an organization's objectives. If a particular server or project is no longer serving the business, eliminating it enhances cloud infrastructure optimization. This can be accomplished through routine scanning and testing to identify idle resources.

2. Rightsize Services

Rightsizing services means allocating cloud resources according to the workload.
Rightsizing allows users to analyze services and adjust them to the appropriate size depending on the needs of the business. By evaluating each service and modifying its size until it matches a specific need, cloud computing services can deliver the required capacity at the lowest possible cost, resulting in cloud cost reduction. In addition, many businesses rely on vendors to deploy cloud resources without the vendor understanding their operational goals. The solution is to develop rightsizing approaches that are customized to your business, strengthening cloud resource optimization. Customized approaches create transparency by giving a clear view of what resources your specific cloud infrastructure needs. Rightsizing will also help you analyze usage metrics and inform business decision makers whether to upgrade or terminate specific services. 3. Create a Budget Set a clear monthly budget for cloud computing services with engineers, product managers, and other team members, rather than an arbitrary number. Building a culture rooted in transparency and cost awareness will also influence how users consume cloud services. 4. Monitoring Cloud computing platforms may see small, incremental pricing changes. However, users should keep an eye out for any unexpected spikes that may impact cloud spend optimization and overall spending. A solution here is to implement an alert that fires when cloud computing costs exceed the budget. Detecting the root of a large increase and analyzing its cause can also ensure that overspending on that particular factor does not recur, allowing for stronger cloud cost control. 5. Select Appropriate Storage Options Organizations need to consider many factors when selecting a storage option.
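A budget alert of the kind described is easy to prototype. This is a minimal sketch assuming simple threshold fractions; in practice you would use your provider’s budgeting service (for example, billing budget alerts) rather than hand-rolled code.

```python
def budget_alerts(monthly_budget, spend_to_date, thresholds=(0.5, 0.9, 1.0)):
    """Return the budget fractions that current spend has crossed.

    `thresholds` are illustrative alert points (50%, 90%, 100% of budget).
    """
    fraction = spend_to_date / monthly_budget
    return [t for t in thresholds if fraction >= t]

# A $10,000 budget with $9,500 already spent crosses the 50% and 90% marks.
print(budget_alerts(10_000, 9_500))  # [0.5, 0.9]
```

Each crossed threshold would trigger a notification (email, chat message, or ticket), giving the team time to investigate before the budget is fully consumed.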
Performance, security needs, and cost requirements should all be taken into consideration when selecting a storage model. Selecting a storage tier that is easy to use and aligned with the budget is critical to cloud cost efficiency. Storage tiers that are underused should also be removed for cloud cost reduction. 6. Use Reserved Instances (RIs) If an organization will be using resources for a specific amount of time, consider purchasing reserved instances. These are prepaid, discounted services, similar to savings plans, that are ideal for steady workloads with a clear timeline. When purchasing an RI, the organization selects the instance type, a region, and a time frame, which may vary depending on the purchase. 7. Manage Software License Costs Software licenses can carry high costs, and monitoring them can be challenging from a cloud cost management perspective. There are often forgotten fees associated with licenses, and many organizations risk paying for licenses they have stopped using. Conducting a thorough software audit will not only help you understand what software is being used within the business but will also show which software is critical and which licenses are no longer needed. 8. Define Clear Metrics Identify the metrics that matter most to your organization. Metrics such as performance, availability, and cost can be used to create reports and dashboards that outline activity in the cloud. Major cloud providers let you tag (or label) resources, which allows an organization to create detailed reports that provide insight into cloud cost analysis. These reports should be used to track spending, as they reveal financial trends. 9. Schedule Cloud Services It is common for organizations to have services that sit idle during certain times of the day.
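Tag-based cost reporting boils down to grouping billing line items by a label. A minimal sketch, using hypothetical line-item dicts in place of a real billing export:

```python
from collections import defaultdict

def cost_by_label(line_items, label):
    """Sum costs per value of one label; items missing it fall under 'unlabeled'."""
    totals = defaultdict(float)
    for item in line_items:
        key = item["labels"].get(label, "unlabeled")
        totals[key] += item["cost"]
    return dict(totals)

items = [
    {"cost": 120.0, "labels": {"team": "data", "env": "prod"}},
    {"cost": 40.5, "labels": {"team": "data", "env": "dev"}},
    {"cost": 15.0, "labels": {}},
]
print(cost_by_label(items, "team"))  # {'data': 160.5, 'unlabeled': 15.0}
```

The "unlabeled" bucket is itself useful: a large untagged total signals that the labeling policy is not being enforced, which undermines any cost report built on it.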
Reduce spending by scheduling services to run only during the time slots when they are fully used. A duty scheduler tag can be applied, and the scheduled services will then be implemented. Leveraging a heatmap can also help establish when services are underused in order to determine an effective scheduling arrangement. SADA, a cloud consultancy that helps other businesses on their cloud journeys, recognizes how effective this strategy can be. SADA’s Director of Customer FinOps, Rich Hoyer, states that “Of these strategies, we have found that scheduling cloud services’ runtimes are often one of the largest overlooked savings opportunities we encounter. Specifically, non-production workloads, such as testing, development, etc., are commonly left running full-time, 24/7, instead of being scheduled to run only when used. The potential savings of running those workloads only during business hours are often surprisingly large, and they can usually be realized via simple automation and modest revisions to maintenance schedules. The first step is to analyze exactly what is being spent on these resources during the hours they sit idle. The number is often large enough to quickly motivate the implementation of a workload scheduling regime!”
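The business-hours scheduling Hoyer describes can start from a simple gate like the one below. The hours and weekday rule are assumptions to adapt to your own maintenance windows; in practice a scheduler service would call the provider’s start/stop APIs based on this decision.

```python
from datetime import datetime

def should_run(now, start_hour=8, end_hour=18, weekdays_only=True):
    """Decide whether a non-production workload should be running right now."""
    if weekdays_only and now.weekday() >= 5:  # Saturday=5, Sunday=6
        return False
    return start_hour <= now.hour < end_hour

print(should_run(datetime(2023, 2, 6, 10)))  # Monday 10:00 -> True
print(should_run(datetime(2023, 2, 4, 10)))  # Saturday 10:00 -> False
```

Running a dev environment ten hours on weekdays instead of 24/7 cuts its runtime to roughly 50 of 168 weekly hours, which is where the "surprisingly large" savings come from.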
An engaged audience eagerly listens as Sanjay Chaudhary, Vice President of Product Management at Exabeam, explains how hackers are able to use MFA bombing to break into employee email accounts and gain confidential company information. This is one of many topics surrounding data optimization discussed at the 2Gather event in Sunnyvale, California on February 3rd. “Not coming from a technical background, I wasn’t sure what to expect at my first event. However, the panel’s rich and engaging narrative made data security into an amazing story to listen to!” said June Lee, Senior Program Manager at Workspot. The first C2C event of the year embodied the essence of forming meaningful connections. At the beginning of the event, all attendees were asked to introduce themselves to two other individuals they had not spoken to before. This created a sense of openness and stepping beyond comfort zones that sparked personable interactions. Through peer-to-peer conversation, guests connected on driving advocacy and feedback around using Google Cloud for data analytics. The event featured a diverse panel of Google partners including NetApp, Exabeam, and Lytics, as well as Cisco Systems. “Everything starts with a customer,” stated Bruno Aziza (@BrunoAziza), the Head of Data and Analytics at Google. This approach is the driving force behind Google building close relationships with its customers and understanding their journeys and the challenges that can arise, one of these being getting value from the data that has been collected. “A large number of organizations are struggling to turn data into value; money is being spent on data systems, yet companies are not always benefiting from it,” says Bruno. Organizations now have access to large sets of data; however, critical pieces of data are not typically within their internal environment. A step in the right direction is to create data products that help tackle this issue.
One of the major keynote speakers, Vishnudas Cheruvally, Cloud Solution Architect at NetApp, provided insight into solutions the organization is working on. “One of the main goals of NetApp is to build an environment that is rooted in trust and to create an infrastructure where users do not have to worry about basic tasks associated with optimizing data,” says Vishnudas. Through billing APIs and the ability to resize data volumes with Google Cloud services, customers have accessible tools that allow them to make informed decisions. This includes creating a customized dashboard to observe what is happening within their environment. Along with data optimization, emerging global trends and their impact on data sovereignty was a recurring topic that captivated the audience. “Data sovereignty and upcoming global trends within data security were key topics discussed at the event and are also motivating factors behind solutions developed by NetApp,” stated Vishnudas. “Everything starts with a customer.” “An emerging trend is using excessive resources through multiple clouds and essentially creating a wasteland,” says Jascha Kaykas-Wolff (@kaykas), President of Lytics. This conversation sparked the topics of global trends, data sovereignty, and cloud strategy. With high amounts of data being stored by organizations, questions begin to arise regarding ownership. “Data has to live in a specific area and there has to be control or sovereignty over it,” says Jascha. The panel then discussed shifting global trends and how they impact customers. Sanjay Chaudhary brought in a product management perspective, which is rooted in solving customer problems. “With more regulations being created, data cataloging is essential in order for customers to understand what is critical in terms of their data and security threats.
The core principle of data is the same; the most important thing is being able to detect a problem with the data and how fast it can be addressed,” says Sanjay. From ownership to data security, the discussion highlighted a variety of fresh perspectives. What stood out to guests was the diversity of the panel, which brought in differing views. “The event had extremely thought-provoking insights stemming from the issues of modern-day data analytics and how it impacts a customer base, as well as a panel that discussed their personal experiences with data,” said Dylan Steeg (@Dylan_Steeg), VP of Business Development at Aible. Both speakers and guests then attended a networking session following the event. Over refreshments and drinks, guests were able to mingle with one another to further expand the conversation. Most importantly, they were able to create meaningful connections: connections that may lead to future collaborative efforts as well as to solutions that can take data optimization to new heights. You and your organization can also build these connections. To start, join C2C as a member today. We’ll see you at our next 2Gather event! Extra Credit:
When organizations talk about the “cloud,” they aren’t referring to that white ball of fluff in the sky on a nice day. The term “cloud” refers to a network of servers hosting information, software, and applications. Cloud computing is the delivery of all of these components over a network or internet connection. There are distinct cloud computing service models and cloud deployment models, as well as many cloud computing benefits for businesses. Cloud Computing Services Infrastructure as a Service (IaaS) IaaS is a model in which third-party cloud providers offer computing infrastructure, such as networks and storage hosted in a virtual environment, so that any user can have access to it. The infrastructure is owned by the service provider and is usually accessed on a pay-as-you-go basis, making it cost-efficient for organizations. Additionally, IaaS is a practical approach for projects or work that is temporary and subject to drastic changes. An example would be a company testing a new product and wanting to stay within a flexible budget. Platform as a Service (PaaS) Platform as a service provides a ready-to-use, cloud-hosted platform for building and running applications. This service relies on the cloud provider for tools and infrastructure, giving developers an environment that is already highly supported when creating apps. This allows developers to use their time better, as it reduces the amount of code they must write themselves. Overall, the cloud provider supplies the infrastructure, including the network, storage, and middleware, and the developers simply select the environments they want to build or test in. Software as a Service (SaaS) Software as a service is a model in which users access application software through a web browser or desktop client. It is a licensed model: the software is provided on a subscription basis, and the application is delivered to end users through an internet browser.
Major advantages of this service include its affordability, due to the subscription model, and convenient maintenance, as the provider supports the environment. Cloud Computing Deployment Models Cloud computing deployment models define how a cloud platform is set up as well as which users have access to it. There are four main types. Public Cloud The public cloud model is an environment in which resources are owned by a cloud computing provider and can be accessed by multiple organizations. Users may differ in the data and applications they use; however, they are all accessing infrastructure from the same provider. The public cloud offers users scalability, as the provider is responsible for maintaining the infrastructure and any updates associated with it. This allows companies to cut cloud computing costs, as they do not have to invest in an entire IT team to operate the infrastructure. Private Cloud The private cloud is a deployment model in which the infrastructure is dedicated to a specific user, making it a single-tenant environment. The cloud is hosted privately within the organization’s own data center and cannot be accessed by other users. This model provides an extra layer of cloud security by restricting access, and it is built with virtualization technology: storage resources are pooled from physical hardware and can then be shared, while a hardware layer keeps the environment separate from any other user’s infrastructure, enhancing cloud computing security. Hybrid Cloud A hybrid cloud model uses a combination of a private cloud and a public cloud, managing workloads across both environments. It is operated through hybrid cloud management tools, which keep both environments operating in sync depending on the needs of the organization.
This is accomplished through a function called cloud bursting, where workloads that are normally hosted on site or within the private cloud expand into the public cloud to meet the dynamic needs of the user. Multicloud Multicloud environments allow an organization to use at least two cloud computing providers. They can involve various combinations, such as two or more public and private clouds. Companies can then utilize multiple cloud computing providers based on business needs or their cloud strategy. A multicloud solution is rooted in accessibility across the cloud computing infrastructure. It can combine multiple service models, including SaaS, IaaS, and PaaS, across providers to form the architecture. This cloud model provides high flexibility, as users are not tied to one vendor and can select cloud-based services from various providers based on their goals. Advantages of Cloud Computing Solutions Flexibility Organizations can choose the cloud deployment model or approach that works best for fluctuating needs and workloads, providing a strong sense of flexibility. Whether an organization requires extra bandwidth or cloud storage, it is able to scale to its needs and work within its budget. Cost Companies on the cloud do not have to invest in their own hardware or equipment, reducing their cloud computing costs and overall spending. Maintenance and upkeep are the responsibility of the provider, which saves the organization resources. In addition, businesses can use the pay-as-you-go model, allowing users to work with cloud computing within a budget. Data Security Cloud computing security features provide an extra layer of protection to stop breaches before they happen. There is often baseline protection for data, including authentication and encryption, to secure confidential information within the cloud. This creates an environment where companies can work with confidential data and workloads.
Scalability Scalability in cloud computing allows the user to increase or decrease resources in order to adapt to shifting priorities. Certain needs, such as cloud data storage capacity, can be scaled through the cloud computing infrastructure, which is beneficial when organizations experience sudden changes. Deployment Speed Companies are able to experience the benefits of cloud computing with just a few clicks. Fast cloud deployment reduces the time individuals and teams need to access resources while simultaneously decreasing the amount of work required, such as maintaining or updating a database. Collaboration Cloud computing allows employees within a business to deliver and share corporate content at any time from any device, promoting a collaborative environment. For example, cloud computing tools support changes to data or documents: users automatically receive changes in real time, ensuring that employees have access to the most recently updated version. Cloud Computing in the Real World Workspot, an organization providing a SaaS platform that delivers Windows 10 Cloud PCs to devices, applies cloud computing technology to many of its daily operations. One cloud computing use case that delivers fast time-to-value and high ROI for the organization is end-user computing (EUC). Most enterprises are rethinking their end-user computing strategies and looking to the cloud to modernize. Key drivers for EUC modernization initiatives include: Hybrid and remote work is now mainstream. IT teams must be able to flexibly provide the right resources to end users, and then adapt quickly when requirements change. Persistent supply chain issues continue to limit access to new hardware, so reusing existing hardware and switching to low-cost endpoints is important. Budget constraints in a tough economic environment require creative solutions and innovation. SaaS solutions are strong contenders for lowering IT costs.
An ever-changing threat landscape is challenging IT and risk management teams to examine zero-trust security policy from every angle. What Does EUC Modernization Look Like In the Real World? Workspot CEO Amitabh Sinha says, “Leveraging Cloud PCs can provide organizations with the scalability and cost efficiencies they need to mitigate the major pain points their users face. Replacing a physical PC with a Cloud PC provides secure access from any device or browser while maintaining high performance, total security, and the best user experience. Cloud PCs also future-proof end-user computing, so organizations are ready for the next technology wave––and the next business upheaval. This is why we are seeing end-user computing modernization initiatives across industries.” Extra Credit:
On November 10, 2022, C2C returned to Google’s offices in Chelsea, Manhattan for a 2Gather event all about intelligent automation. The robust event program included a fireside chat with representatives of Granite and Becton, Dickinson and Company (BD) moderated by C2C partner Automation Anywhere, a presentation from partner Palo Alto Networks, a conversation between partner Workspot and their customer MSC, and a panel featuring the speakers from MSC, Workspot, BD, and Granite. Google’s Drew Hodun introduced and moderated the event program, but the majority of the content was driven by the participating customers and partners and the guests in attendance with questions and ideas to share with the speakers and with one another. After a hello and a word on C2C from director of partnerships Marcy Young (@Marcy.Young) and an opening address from Drew, Ben Wiley of Automation Anywhere introduced Paul Kostas of Granite and Nabin Patro of BD, and offered some background about Automation Anywhere’s mission to build digital workforces for organizations that need them, with a particular focus on business processes like data entry, copy and paste, and parsing emails. Ben also mentioned Automation Anywhere and Google Cloud’s joint solutions for office departments like contact centers. Paul made a point of shouting out solutions like AA’s Automation 360 and Google Cloud’s Doc AI, which Granite used to build 80 automations in 9 months, and Nabin touched on how automation helped manage some of the work that went into BD’s manufactured rapid diagnostic test kit for COVID-19. “The technology is forcing us to think differently.” Next, Akhil Cherukupally and David Onwukwe of Palo Alto Networks took the stage to walk through some of the technical components of the security platforms the company offers organizations navigating the cloud adoption process.
Then Workspot’s Olga Lykova (@OlgaLykovaMBA) brought up Google Enterprise Account Executive Herman Matfes and Dung La and Angelo D’Aulisa of MSC for a look back through the history of the companies’ work together. Olga started things off with an origin story about the Citrix leaders who left their company to start a cloud-hosted platform with Workspot, which turned out to be a superior business model. Then she turned to the other guests to explore how Workspot helped MSC build automations on the front end of their business processes and ultimately implement these automations end to end. Finally, Drew, Angelo, Dung, Paul, and Nabin returned to the stage for a panel discussion breaking down all of the issues raised during the previous sessions. A question from Drew about how each organization’s work has impacted its customers prompted Paul to go long on the benefits of Granite’s services. When Angelo gently added, “We’re a Granite customer,” the audience laughed along with the panelists. “Thank you for being a customer,” Paul said. Drew also asked the group about what’s coming next at each company. The answers ranged from the concrete to the philosophical. “The technology is forcing us to think differently,” Nabin observed. In response to a question from a guest in the audience, Paul acknowledged the human impact of automation and stressed the importance of getting people to feel good about automating processes rather than fearing for the future of their jobs. As usual, the conversations did not stop here. The speakers and guests continued to share ideas and brainstorm solutions into the networking reception and even the informal dinner that followed, where Clair Hur (@write2clair) of Vimeo stopped by to explain how the company is cutting costs significantly after migrating from AWS to Google Cloud. More of these stories will be collected in our upcoming monthly recap post.
For now, watch the full recording of the New York event here: Extra Credit:
The following article was written by C2C Global President Josh Berman (@josh.berman) as a member exclusive for TechCrunch. The original article is available here. The past two years have been an exciting period of growth for the cloud market, driven by increased demand for access to new technology during COVID-19 and the proliferation of the “work-from-anywhere” culture. IT leaders worked to shift workloads to the cloud to ensure business continuity for the remote workforce, leading to skyrocketing adoption of cloud computing. This momentum is expected to pick up in 2022 and beyond. For many businesses, the pandemic accelerated their digital transformation plans by months, or even years. Reliance on cloud infrastructure will only continue to grow as organizations adjust to the hybrid work model. Gartner projects that global spending on cloud services will reach over $482 billion in 2022, up from $313 billion in 2020. As we start the new year, C2C, an independent Google Cloud community, has identified six cloud computing trends to watch in 2022. More people are harnessing new technologies The pandemic inspired a new generation of entrepreneurs. Whether out of necessity after mass layoffs, a desire for a more flexible lifestyle, or the inspiration to finally pursue a passion, millions have started their own ventures. As their businesses grow and digitize, entrepreneurs across industries are embracing the cloud and adopting technologies like machine learning and data analytics to optimize business performance, save time, and cut expenses. The benefits to small businesses and startups are countless. For one, the cloud makes data accessible from anywhere with an internet connection, enabling the seamless collaboration necessary in a hybrid work environment. Without having to spend on expensive hardware and software, entrepreneurs can invest in other areas as they scale their businesses.
We often see founders leveraging the power and ease of use of Google Cloud Platform AI and ML tools to rapidly prototype and build applications. They’ve used this technology to create unique and exciting solutions, like tools that use ML to analyze English pronunciation or to predict one’s mood from their breath. There’s an increased desire for more direct access to product developers As more users shift to the cloud, there is an increased desire to connect and network with the product developers who have worked behind the scenes to bring the latest technologies to market. Online communities like C2C make having these conversations possible and easy. These conversations ultimately help users secure the applications they need and deploy them successfully to ensure operational success. Greater emphasis on security in the cloud Last year, businesses looked to the cloud to reshape their operations and become more agile. While cloud computing certainly offers the benefits of flexibility and productivity, it also puts organizations at risk of becoming more vulnerable to cyber threats and data breaches. For that reason, security is going to become a larger part of the cloud conversation throughout 2022 and beyond. This reality is going to drive a greater emphasis on building more security into the cloud. As the world continues to go digital, organizations are being tasked with ensuring that security within the cloud is properly integrated into evolving business models. More organizations will seek out data solutions Almost all enterprises operate in multi-cloud environments. As a result, a lot of valuable data is spread across systems, creating a need to make data accessible to more analytics tools. Cross-cloud analytics solutions are on the rise to help data analysts manage all their insights.
At C2C, we’ve discussed the noticeable rise in the number of people and companies looking at data solutions, specifically BigQuery, Google Cloud’s fully managed, serverless data warehouse. These companies are typically mapping out their data strategy, but, interestingly, some companies that are trying to work with AI and ML realize they need a solution that makes their data consistent and easy to store. Productivity tools will become even more sophisticated When the world was forced into a remote work model overnight during the pandemic, many companies were not prepared for the challenge of immediately shifting their processes to a virtual format. The ongoing challenge for many companies that have transitioned to a hybrid model has been determining how best to keep both remote and in-person team members engaged. This opened doors for cloud-based collaboration tools like Google Workspace, which are only going to become a bigger part of our day-to-day operations. These solutions have capabilities like document collaboration, integrated chat features, virtual whiteboards, and more. Much of that growth has already occurred: nearly 80% of workers were using collaboration tools for work in 2021, up from just over half of workers in 2019, according to Gartner research. Not only are more companies going to adopt these cloud-based collaboration solutions, but the solutions are going to be enhanced and evolve as the needs of the hybrid workforce change. Cloud certifications are becoming more sought after by employers As industries accelerate remote adoption of cloud technologies, certifications and other IT credentials are becoming increasingly important and sought after by employers. And more IT professionals see the benefits of earning these certifications as well.
More than 90% of IT leaders say they’re looking to grow their cloud environments in the next several years, yet more than 80% of those same leaders identified a lack of skills and knowledge among their employees as a barrier to achieving this growth. It turns out that the next big challenge for companies will not be how to manage cloud technology, but how to find enough qualified workers certified in it.
On January 11, 2022, C2C members @antoine.castex and @guillaume blaquiere hosted a powerful session for France and beyond in the cloud space. C2C Connect: France sessions intend to bring together a community of cloud experts and customers to connect, learn, and shape the future of cloud. 60 Minutes Summed Up in 60 Seconds Yuri Grinshteyn, Customer SRE at Google Cloud, was the guest of the session. Also known as “Stack Doctor” on YouTube, Grinshteyn advocates the best ways to monitor and observe services, following the SRE best practices learned by Google in its own SRE teams. Grinshteyn explained the difference between monitoring and observability: monitoring is “only” the data about a service or resource, while observability is the behavior of the service metrics over time. To observe data, you need different data sources: metrics, of course, but also logs and traces. There are several tools available, each serving observability: Fluentd, OpenCensus, Prometheus, Grafana, etc. All are open source, portable, and compatible with Cloud Operations. The overhead of instrumented code is negligible, and the metrics it provides are far more valuable than the few CPU cycles lost to it. Both microservices and monoliths should use trace instrumentation. Even a monolith never works alone: it uses Google Cloud services, APIs, databases, etc. Trace allows us to understand North-South and East-West traffic. Get in on the Monitoring and Observability Conversation! Despite its 30-minute time limit, this conversation didn’t stop. Monitoring and observability is a hot topic, and it certainly kept everyone’s attention. The group spent time on monitoring, logging, error budgets, SRE, and other topics such as: Cloud Operations Managed Service for Prometheus Cloud Monitoring Members also shared likes and dislikes.
For example, one guest, Mehdi, “found it unfortunate not to have out-of-the-box metrics on GKE to monitor golden signals,” and said “it’s difficult to convince ops to install Istio just for observability.” Preview What's Next Two upcoming sessions will cover topics that came up but didn’t make it to the discussion floor. If either of these events interests you, be sure to sign up to get in touch with the group! Extra Credit Looking for more Google Cloud products news and resources? We got you. The following links were shared with attendees and are now available to you! Video of the session Cloud Monitoring Managed Service for Prometheus sre.google website SRE books Stack Doctor YouTube playlist
This C2C Deep Dive was led by Nathen Harvey (@nathenharvey), a cloud developer advocate at Google who helps the community understand and apply DevOps and SRE practices in the cloud. The Google Cloud DORA team has undertaken a multi-year research program to improve your team’s software delivery and operations performance. In this session, Nathen introduced the program and its research findings, and invited Google Cloud customer Aeris to demonstrate the tool in real time. Participate in the survey for the State of DevOps Report by July 2. The full recording from this session includes:
(1:40) Speaker introduction
(3:20) How technology drives value and innovation for customer experiences
(4:20) Using DORA for data-driven insights and competitive advantages
(7:15) Measuring software delivery and operations performance: deployment frequency, lead time for changes, change fail rate, time to restore service
(14:20) Live demonstration of the DevOps Quick Check with Karthi Sadasivan of Aeris
(23:00) Assessing software delivery performance results: understanding benchmarks from DORA’s research program; the scale of low, medium, high, or elite performance; predictive analysis by DORA to improve outcomes
(29:30) Using results to improve performance: capabilities across process, technical, measurement, and culture; the Quick Check’s prioritized list of recommendations
(37:40) Transformational leadership for driving performance forward: psychological safety, a learning environment, commitment to improvements
(41:45) Open Q&A
Other Resources: Take the DORA DevOps Quick Check Results from the Aeris software delivery performance assessment Google Cloud DevOps
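Two of the four DORA measures named in the session, deployment frequency and change fail rate, can be computed directly from a deployment log. This sketch assumes a hypothetical list of deployment records with a `failed` flag; real numbers would come from your CI/CD system.

```python
def dora_metrics(deployments, period_days):
    """Compute deployment frequency and change fail rate from a deploy log."""
    per_week = len(deployments) / (period_days / 7)
    failures = sum(1 for d in deployments if d["failed"])
    change_fail_rate = failures / len(deployments)
    return {"deploys_per_week": per_week, "change_fail_rate": change_fail_rate}

# Ten deployments in four weeks, one of which failed.
log = [{"failed": False}] * 9 + [{"failed": True}]
print(dora_metrics(log, period_days=28))
# {'deploys_per_week': 2.5, 'change_fail_rate': 0.1}
```

Lead time for changes and time to restore service would need commit and incident timestamps as well, which is why DORA's Quick Check asks about all four separately.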
The Internet of Things (IoT) affects everything from street lighting, smart parking, air quality, and ITS systems to IP cameras, waste collection, and digital signage. When IoT is managed, monitored, and maintained effectively, it changes everything from our cities to our utilities, but we need to stay on top of its challenges. These include:

- How do you connect thousands of IoT devices to back-office systems?
- How do you manage IoT platforms from multiple vendors?
- How do you install and maintain the various IoT devices, some more complex than others?
- If you run a team, how do you guide those workers through step-by-step workflows and diagnostics?

That’s where IoTOps, short for IoT operating systems, comes in. In our Google stratosphere, we’re given Fuchsia OS to use. Think of it as a cloud-based SaaS solution built specifically for IoT.

IoTOps integrates the data. IoTOps helps you manage millions of IoT components, such as smart streetlights, traffic signals, power line sensors, and garbage, parking, and air quality sensors, from one pane of glass. It integrates the various connected devices with back-end systems quickly, easily, and efficiently. IoT management also covers the devices and the gateways the IoT devices are connected to: a humongous enterprise. All step-by-step diagnostics can be managed and monitored from this one pane of glass for a finished product to reach smartphones and tablets.

IoTOps speeds up the process. By making your workflow configurable, IoTOps helps you manage the entire life cycle of your IoT operations. You have your IoT planning, inventory, installation, maintenance, and work orders in one place, making the process operational and fast.

IoTOps simplifies IoT management. IoTOps tames your IoT explosion by helping you manage its escalating volume of data from one place. Not every project needs the same level of management or the same degree of care and attention.
IoTOps brings to the forefront those facets that need special attention and helps you design, regulate, and monitor your network performance from one pane of glass.

IoTOps connects the workforce. The IoT operating system helps IT and operations work together, much the same way DevOps does. With all data displayed in one place, engineers can stream data to diverse IoT applications and update all back-office systems (e.g., GIS, CMS, asset management, network management, CRM, and billing). At the other end, local plant technicians can use the OS to monitor and troubleshoot the industrial network. In the event of a device failure, technicians can move to rapid device replacement.

IoTOps gives you actionable results. IoT operating systems serve as an Analytics as a Service (AaaS) dashboard, giving you the insight to build on your IoT data for actionable results. Put another way, these operating systems provide you with visibility into your IoT projects’ inner workings and help you analyze the endless volume of data emitted from your connected devices.

IoTOps detects threats. IoTOps alerts you to anomalies and changes in IoT response time, characteristics, and behavior. It’s like an integration Platform as a Service (iPaaS), which standardizes how applications are integrated across the workflow. When differences are detected, it brings them to your attention promptly, so you can act on these cues instantly and prevent mishaps, such as network breaches.

IoTOps saves human labor and costs, and protects productivity. IoTOps reduces downtime by catching mishaps right away. Its timely intervention saves you the expense of fixing or replacing components. And its lean workflow does away with data drift, giving operations teams and data scientists the creativity and motivation to continue their work. You also don’t need to hire the many other specialists you would otherwise have required for deployment.

IoTOps helps with incident management.
The IoT impact on infrastructure and operations (I&O) can be significant, which is why it's crucial to catch mishaps in their beginning stages. An IoT OS helps you see the mass of connected facets and incoming data across environments, making the learning process error-resilient, stopping software from getting lost, and keeping your team on the same page.

Very simply, IoT operating systems are platforms that help you manage, monitor, and maintain your IoT operations. Their value is immense. They allow you to complete important IoT projects from start to finish, with all components configured precisely according to manufacturer, network, and security specifications, and with faster completion times. There are no more missing items or IoT elements that misbehave once they're configured in the network. Costly downtime and sunk expenses are a thing of the past, since platforms like Google Fuchsia OS automate your IoT projects in a streamlined CI/CD process.

Let’s Connect! Leah Zitter, Ph.D., has a master’s in philosophy, epistemology, and logic and a Ph.D. in research psychology.
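The threat detection described above ("IoTOps alerts you to anomalies and changes in IoT response time") can be sketched in a few lines. This is a toy z-score check, not any vendor's actual detection logic, and the streetlight readings are made up for illustration:

```python
import statistics

def flag_anomalies(readings, threshold=2.5):
    """Flag readings whose z-score exceeds `threshold` standard deviations."""
    mean = statistics.mean(readings)
    stdev = statistics.stdev(readings)
    if stdev == 0:
        return []
    return [(i, v) for i, v in enumerate(readings)
            if abs(v - mean) / stdev > threshold]

# Response times (ms) from a hypothetical streetlight controller; the
# 480 ms spike stands out against an otherwise steady baseline.
response_ms = [42, 45, 44, 43, 41, 46, 44, 480, 43, 45]
print(flag_anomalies(response_ms))   # → [(7, 480)]
```

A production IoTOps platform would run checks like this continuously across millions of devices and surface the flagged readings on the same pane of glass it uses for everything else.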
MLOps—or machine learning (ML) operations—is the equivalent of DevOps, but with a significant difference. DevOps concerns itself with the software delivery cycle, working to close the gap between development and IT teams so they build, test, and release software faster and more reliably. MLOps aims to achieve the same results in a data science and ML context. As the team at Google Cloud says, “the real challenge isn't building an ML model. The challenge is building an integrated ML system and to continuously operate it in production.”

For a successful ML model, several processes must be in place and continuously work well together, resembling the continuous, flawless, high-quality delivery of an assembly line that produces expert results without fail. Here are nine reasons why MLOps is essential for developer productivity.

1. MLOps makes the ML process faster. Since the ML process involves countless steps—from design to development, testing, and delivery—engineers need a function that cuts through the manual sluggishness and expedites the cycle. Without MLOps, the process is time-consuming, especially if the model is upgraded across different ML frameworks. Communication between teams would also require diverse sign-offs and tedious back-and-forths, dragging out an already slow process to months, if not years.

2. MLOps automates the ML process. A regular ML process would be highly manual, with code written from scratch for each use case. There would also be numerous bottlenecks, with software getting stuck at any stage in the process and work stopped indefinitely. Software might never make it to the finish line. ML platforms that support MLOps help you avoid bottlenecks by keeping all versions of the work documented, stored, and shared. Stakeholders set KPI benchmarks, and the project flows on to completion.

3. MLOps creates repeatable workflows.
MLOps allows custom-built steps to be reused, leveraged, and built on, not just by the author but also by other data scientists on your team and across your organization. Just as DevOps shortens production life cycles by improving products with each iteration, MLOps drives insights by shortening the life cycle between the ML training and development stages.

4. MLOps makes the ML process error-resilient. The manual ML process is drastically error-prone, with issues like training-serving skew. A lack of coordination between the operations and data science teams leads to unexpected differences between online and offline performance. Data scientists who work on ML need to know that the result matches their trained model in a real-time setting. For that, they need a streamlined CI/CD (continuous integration/continuous delivery) process with a constant feedback loop between dev and ops, so engineers can improve the model and rapidly deploy. Such error resilience is fundamental in a workplace environment where you get new engineers all the time. A managed approach, achieved through MLOps, stops software from getting lost and keeps your team on the same page.

5. MLOps prevents fatigue. A manual ML process turns your energetic, promising crew of data scientists into frustrated and underutilized engineers who feel they're spinning in an endless Sisyphean circle. MLOps does away with data drift, giving operations teams and data scientists the creativity and motivation to continue their work. You’re more likely to get promising insights and actionable results.

6. MLOps reduces bias. MLOps can guard against certain biases in algorithms that, if undetected and uncorrected, can harm under-represented people in fields such as health care, criminal justice, and hiring. Overlooked biases in marketed software can also dent a company’s reputation and expose it to legal scrutiny.

7. MLOps leads to actionable business value.
Close the training-to-operations loop faster, and you turn ML insights into actionable results. Each stage of the process seamlessly connects with and flows into the next, workers on different teams collaborate, and bottlenecks disappear, leading to productive outcomes.

8. MLOps helps you with regulatory compliance. The ML process is held accountable to a slew of government compliance and ethical obligations around data security, machine ethics, and data governance. MLOps frees your data team to focus on what they do best, creating and designing software, while your operations team concentrates on the ins and outs of management and regulation.

9. MLOps facilitates team communication. Each team has its particular talents. Without MLOps, your operations teams would struggle to communicate with your data engineers, data scientists, and software engineers, and vice versa, resulting in wasted human potential. There would be wasted software potential, too, with promising software designs and solutions held up at deployment or some earlier stage, rendering them ineffective.

Bottom Line

Why do we need MLOps? Here’s what the engineers at Google Cloud say: “With the long history of production ML services at Google, we've learned that there can be many pitfalls in operating ML-based systems in production.” A platform for MLOps helps you shorten the system development life cycle and ensure that high-quality software is continuously developed, delivered, and maintained in production. Done well and consistently followed through, MLOps can be a game-changer for your company: it eliminates waste, automates the ML cycle, and produces richer, more consistent insights.

Let’s Connect! Leah Zitter, Ph.D., has a master’s in philosophy, epistemology, and logic and a Ph.D. in research psychology. There is so much more to discuss, so connect with us and share your Google Cloud story. You might get featured in the next installment!
Get in touch with Content Manager Sabina Bhasin at email@example.com if you’re interested. Rather chat with your peers? Join our C2C Connect chat rooms!
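The training-serving skew mentioned in point 4 above is easy to illustrate: compare a feature's distribution at training time against what the live service actually receives. This is a deliberately simple mean-drift check with made-up numbers and an assumed tolerance, not a production monitoring recipe:

```python
import statistics

def skew_alert(train_values, serve_values, tolerance=0.25):
    """Alert when the serving mean drifts from the training mean by
    more than `tolerance`, as a fraction of the training mean."""
    train_mean = statistics.mean(train_values)
    serve_mean = statistics.mean(serve_values)
    return abs(serve_mean - train_mean) / abs(train_mean) > tolerance

train_feature = [10.2, 9.8, 10.5, 10.0, 9.9]   # values seen during training
live_feature = [14.1, 13.8, 14.5, 14.0]        # values the live service sees
print(skew_alert(train_feature, live_feature))  # → True: investigate before trusting predictions
```

An MLOps pipeline would run comparisons like this automatically on every serving window, so the feedback loop between dev and ops catches the skew instead of a frustrated data scientist discovering it weeks later.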
This was written by Leah Zitter, Ph.D.

Problem: You've got massive data flowing in from multiple sources—Google Cloud, your private cloud, Azure, AWS, or others—flooding you with noise. You simply don't have the time or ability to identify which alert is essential and which to overlook. And that's a pity, because you may inadvertently miss something urgent, like an unusual spike in traffic that could indicate a possible cybersecurity concern. That's where AIOps—short for artificial intelligence for IT operations—comes in. These algorithmic operations combine ML with big data to troubleshoot and automate IT operations processes.

AIOps Accurately Identifies the Root Cause

AIOps accurately identifies root cause in at least three areas:

- Correlation, or the co-occurrence of events: AIOps helps you find the common root of several IT processes that are short-circuiting at the same time.
- Topology, or the actual physical connections between items: AIOps helps you identify where things started going wrong in one or more items.
- Clustered causes: if you’ve got, say, a sequence of events or a cluster of similar events, AIOps helps you identify which of the causal events in that sequence or cluster caused the breakdown.

These three points help you identify where and why things go wrong and shorten mean time to detection (MTTD): AIOps enables you to detect the problem faster than a manual inspection of the IT system would.

AIOps Creates a Single Pane of Glass for Alert Data

You've got data coming in across vectors, such as from Microsoft Azure, your native systems, VPN gateways, Amazon Web Services, and so forth. AIOps helps you cluster this mass of data on one platform. This makes things easier for systems specialists, who simply need to review alerts on one pane of glass to identify and resolve problems and automate solutions.
It helps you see data across environments or, in other words, enables you to put the entire hybrid cloud in one place.

AIOps Offers Intelligent Incident Management

So you’ve got all this data coming in. What do you do with it? AIOps assigns the stream of incoming alerts into relevant groups so the different issues can be resolved. Example: AIOps assigns events that show similar patterns to the silo for IT operations management, events that show incident factors to IT service management, and so forth. Each cluster of events is then assigned to a relevant agent. This improves mean time to recovery (MTTR), helping the right person get the right work done faster. This automation makes your business better, faster, and more efficient.

AIOps: Anomaly and Threat Detection

AIOps helps us identify associations and determine whether something’s wrong in the first place. That’s anomaly detection. In other words, AIOps alerts us to sudden changes in behavior or data. It looks at values over time and determines if some sort of abnormality is happening. Example: AIOps tells us if one of our systems is getting an unusual amount of traffic, indicating a possible cyber breach.

AIOps Forecasts Events

AIOps helps with predictive analytics, using data to forecast a behavior before it happens. Example: Unlike in traditional technology, where you hit your storage limit without warning, AIOps warns you that, for example, “You’re 14 days from hitting 90% capacity.” Forewarned is forearmed.

AIOps Resolves Issues Automatically

Now that AIOps has helped you identify the problem, you can fix the issue with some sort of scripting or external orchestration (also called runbook automation) to prevent the issue from recurring. In other words, you automate the solution so that processes run faster and more accurately, without the need to manually reconfigure them each time something goes askew.
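The "14 days from hitting 90% capacity" style of forecast above can be sketched with a simple least-squares trend line over daily usage samples. The numbers, the linear-growth assumption, and the function name are all illustrative; real AIOps platforms use far richer models:

```python
def days_until_threshold(daily_usage_pct, threshold=90.0):
    """Fit a least-squares line to daily usage samples and estimate how
    many days remain until it crosses `threshold`; None if not growing."""
    n = len(daily_usage_pct)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(daily_usage_pct) / n
    slope = (sum((x - x_mean) * (y - y_mean)
                 for x, y in zip(xs, daily_usage_pct))
             / sum((x - x_mean) ** 2 for x in xs))
    if slope <= 0:
        return None                            # usage flat or shrinking
    intercept = y_mean - slope * x_mean
    return (threshold - intercept) / slope - (n - 1)

usage = [60.0, 61.5, 63.0, 64.5, 66.0]   # disk usage %, one sample per day
print(f"{days_until_threshold(usage):.0f} days until 90% capacity")
```

For this toy series growing 1.5% per day from 66%, the estimate comes out to 16 days, which is exactly the kind of early warning that turns a surprise outage into a routine provisioning ticket.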
AIOps: Incident Management

AIOps logs a record of the troubleshooting incident, such as “The system could remediate this problem” or “We tried the x and y scripts and finally used z.” Such records help the IT team fix similar disruptions more cheaply, quickly, and efficiently. If a solution falls through, all you need to do is retrace your steps to explore alternatives.

AIOps for the Future World

AIOps assigns incoming alerts to relevant IT containers so the right agent can identify the problem, automatically remediate the issue, predict and prevent other adverse events from occurring, and log a record of the event for incident management. AIOps integrates information from multiple sources on one single pane of glass so a system administrator can read and interpret that information more easily. Put otherwise, AIOps helps you do everything from discovery to resolution and reduces the time it takes to troubleshoot events, so your business can quickly spring back to operations. In the World of the Future (that’s actually the world of the present), AIOps is the last word in your ability to adjust to unexpected and constantly changing IT environments. With the recent shift to remote work, AIOps helps us understand, troubleshoot, and automate IT processes across enterprises for competitive business value. It's this digitization that makes or breaks a company.

Extra Credit

- Google Cloud introduces pipelines for those beyond ML prototyping
- Setting up an MLOps environment on Google Cloud
- Advanced API Ops: Bringing the power of AI and ML to API operations
- An introduction to MLOps on Google Cloud
Priyanka Vergadia, a developer advocate at Google, has created more than 300 videos, articles, podcasts, courses, and tutorials to help developers learn Google Cloud fundamentals, solve their challenges, and pass certifications. Or, in other words, she's your go-to Cloud Girl.Vergadia will be sharing her excellent content with the C2C community, and we're excited to embrace her creative solutions to complicated tech. Our first post from Vergadia is about where to run your systems. Have you ever wondered how a tech stack would come together? Take a look at the sketch, and feel free to share your questions on our C2C Community platform (join here) or with Vergadia on Twitter!Want to know more about who Vergadia is and why she started #GCPSketchnotes? A profile featuring Cloud Girl will be coming soon!
Originally published on December 4, 2020. In this C2C Deep Dive, product expert Richard Seroter aimed to build a foundation of understanding through live Q&A. Here’s what you need to know:

What is Anthos? In its simplest form, Anthos is a managed platform that extends Google Cloud services and engineering practices to your environments so you can modernize apps faster and establish operational consistency across platforms.

Why GCP and Anthos for app modernization? Responding to an industry shift and need, Google Anthos “allows you to bring your computing closer to your data,” Seroter said. So if data centers are “centers of data,” it's helpful to have access to that data in an open, straightforward, portable way, and to be able to do that at scale and consistently. Hear Seroter explain how this can help you consolidate your workloads.

First-generation vs. second-generation cloud-native companies: What have we learned? The first generation was all about infrastructure automation and a continuous delivery (CD) mindset, at a time when there wasn’t much research into how to make it happen. So some challenges included configuration management, dealing with multiple platforms, and dealing with security. Now, as Richard Seroter explains in this clip, the second generation is taking what has been learned and building upon it for sustainable scaling, with a focus on applications.

Is Unified Hub Management possible through Anthos for the new generation? Yep. Anthos offers a single-management experience: you can manage every Anthos cluster in one place, see what each is doing, and push policy back to them, too. You can apply configurations and more to simplify billing and the management experience.

Serverless anywhere? You bet. Use Cloud Run for Anthos. Building upon the first generation of platform as a service (PaaS), GCP offers Cloud Run for Anthos as a solution for teams that need more flexibility and want to build on a modern stack.
Besides being Richard Seroter’s favorite, it balances the three vital paradigms that exist today: PaaS, Infrastructure as a Service (IaaS), and Container as a Service (CaaS). Watch the clip to hear Seroter explain the how and the why.

What about a GitOps workflow and automation—is scaling possible? Yes: with Anthos Config Management (ACM), policy and configuration are possible at scale. You can manage all cloud infrastructure, not just Kubernetes apps and clusters, and even run end-to-end audits and peer review. Watch to learn how this works.

Question from the community: Are Hybrid AI and BigQuery capabilities available for Anthos on-prem? With Hybrid AI for Anthos, Google offers AI/ML training and inferencing capabilities with a single click. Google Anthos also allows for custom AI model training and MLOps lifecycle management using virtually any deep-learning framework.

Prefer to watch the whole C2C Deep Dive on Application Development with Anthos?