Are you a cloud architect or administrator, or do you work in SysOps or DevOps? Do you want to create new solutions or integrate existing systems, application environments, or infrastructure with Google Cloud? The Certified Kubernetes Administrator (CKA) program is an excellent way to level up your skills.

To help you get familiar with this Kubernetes learning pathway, Tim Berry, Head of Cloud Training at Appsbroker, joined this 2Learn event to talk about how to:

- Use Kubernetes for easy app deployment
- Accelerate learning through presentations, demos, and hands-on labs
- Deploy practical solutions, including security and access management, resource management, and resource monitoring
- Access all the knowledge and skills needed for CKA certification

Appsbroker’s trainings are for engineers, by engineers. They follow the curriculum of the CNCF Certified Kubernetes Administrator exam and offer courses for beginner, intermediate, and advanced users. Visit the Appsbroker YouTube channel to learn more and join the C2C Community to continue this conversation. Watch the full recording of the event here:
On August 30, 2022, C2C joined forces with our partners at DoiT to host a 2Gather event all about modernizing your organization on Google Cloud. Presented live at Google’s office in the repurposed Spruce Goose hangar in Playa Vista, California, Google Cloud Modernization with DoiT offered a deep exploration of the practices and technologies DoiT uses to help organizations modernize their resources and infrastructure on its Cloud Management Platform. DoiT’s Yuval Drori Retziver (@yuval) delivered the main program, comparing and contrasting the capabilities and advantages of Google Cloud Run and Google Kubernetes Engine.

Yuval prefers Cloud Run’s serverless, pay-per-use model, but he also made a point of mentioning numerous features and benefits of Kubernetes, including liveness, readiness, and startup probes and horizontal pod autoscaling. Even when Yuval offered to skip slides reviewing details familiar to most users, the crowd urged him to cover everything he had prepared. The various options for modernization Yuval described illustrated North America Head of Google Cloud Customer Community Dale Rossi’s (@Dale Rossi) comment that “As a Google Cloud customer, or any customer, it’s a journey.”

Watch the full recording here:

Extra Credit:
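As a bit of extra credit for readers who haven’t used the probe types Yuval mentioned, here is a minimal, hypothetical container spec showing how liveness, readiness, and startup probes are declared. The image name and HTTP endpoints are placeholders, not anything shown at the event:

# Hypothetical pod illustrating the three probe types discussed.
# Image and endpoint paths are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: app
    image: example/app:1.0          # illustrative image
    ports:
    - containerPort: 8080
    startupProbe:                   # holds off the other probes until the app finishes booting
      httpGet:
        path: /healthz
        port: 8080
      failureThreshold: 30
      periodSeconds: 10
    livenessProbe:                  # restarts the container if this check fails
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 15
    readinessProbe:                 # removes the pod from load balancing while failing
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5

The division of labor is the point: the startup probe protects slow-booting apps, the liveness probe recovers stuck ones, and the readiness probe keeps traffic away from pods that aren’t ready to serve.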
On May 12, C2C hosted its first east coast event at Google’s New York office. We believe in-person connections are invaluable to everyone in our community, especially when our members are able to immediately converse with amazing speakers who are sharing their journeys and business outcomes. The stories from this event—presented on stage by Google Cloud customers, partners, and employees—can all be reviewed below.

A Warm Welcome from C2C and Google Cloud

Opening the event was Marco ten Vaanholt (@artmarco), who leads C2C initiatives at Google Cloud. To kick things off, Marco prompted the audience to get to know each other, and all enthusiastically turned to their table neighbors. After Marco covered the history of C2C and our early adventures in hosting face-to-face events, Marcy Young (@Marcy.Young), Director of Partnerships at C2C, followed to reiterate our mission statement: we’re here to connect Google Cloud customers across the globe. Since March of 2021, when the C2C online community first launched, our community has grown to make valuable connections with people like Arsho Toubi (@Arsho Toubi), Customer Engineer, Google Cloud, who followed Young to introduce C2C’s partner speakers.

All three introductory speakers emphasized the excitement of being able to make new connections in person again. As ten Vaanholt put it, peers introducing themselves and initiating new relationships is “the start of community building.” When Toubi announced “I received some business cards, and that was a fun experience I haven’t had in two years,” the room responded with a knowing laugh. Toubi also asked the Googlers in the room to stand up so others could identify them. “These are my colleagues,” she said. “We’re all here to help you navigate how to use GCP to your best advantage.”

Getting to Know AMD and DoiT

C2C partners and the sponsors for this event, DoiT and @AMD, shared updates on the partnership between the two companies focused on cloud optimization.

Michael Brzezinski (@mike.brzezinski), Global Sales Manager, AMD
Spenser Paul (@spenserpaul), Head of Global Alliances, DoiT

Brzezinski framed the two presentations as a response to a question he received from another attendee he met just before taking the stage, a question about how the two companies work together to enhance performance while reducing cost. One half of the answer is AMD’s compute processors, which Brzezinski introduced one by one. To complete the story of the partnership between the two companies, Spenser Paul of DoiT took the stage with his Labrador Milton. “I’m joining the stage with a dog, which means you won’t hear anything I’m saying from here on,” he said as he took the microphone. “And that’s totally okay.” The key to minimizing cost on AMD’s hardware, Paul explained, is DoiT’s Flexsave offering, which automates compute spend based on identified need within a workload.

A Fireside Chat with DoiT and Current

Spenser Paul, Head of Global Alliances, DoiT
Trevor Marshall (@tmarshall), Chief Technology Officer, Current

Paul invited Marshall to join him onstage, and both took a seat facing the audience, Milton resting down at Paul’s feet. After asking Marshall to give a brief introduction to Current, Paul asked him why Current chose Google Cloud. Marshall did not mince words: Current accepted a $100,000 credit allowance from Google after spending the same amount at AWS. Why did Current stay with Google Cloud? The Google Kubernetes Engine. “I like to say we came for the credits, but stayed for Kubernetes,” Marshall said.
Paul wryly suggested the line be used for a marketing campaign. The conversation continued through Current’s journey to scale and its strategy around cost optimization along the way. When Paul opened questions to the audience, initially, none came up. Seeing an opportunity, Paul turned to Marshall and said, “Selfishly, I need to ask you: what’s going to happen with crypto?” Just in time, a guest asked what other functionalities Current will introduce in the future. After an optimistic but tight-lipped response from Marshall, another moment passed. Marshall offered Paul a comforting hand and said, “We’re all going to make it through,” before fielding a few more questions.

Panel Discussion

All our presenters, with the addition of Michael Beal (@MikeBeal), CEO, Data Capital Management, reconvened on stage for a panel discussion. Toubi, who moderated the conversation, began by asking Michael Beal to introduce himself and his company, Data Capital Management, which uses AI to automate the investment process. Beal ran through Data Capital Management’s product development journey, and then, when he recalled the company’s initial approach from Google, playfully swatted Marshall and said, “The credits don’t hurt.” Toubi then guided Beal and Brzezinski through a discussion of different use cases for High Performance Computing, particularly on AMD’s processors.

When Toubi turned the panel’s attention to costs, Paul took the lead to explain in practical detail how DoiT’s offerings facilitate the optimization process. “I have an important question,” said Toubi. “Can DoiT do my taxes?” Then she put the guests on the spot to compare Google Cloud to AWS’s Graviton. Brzezinski was ready for the question. The initial cost savings Graviton provides, he explained, don’t translate to better price performance when taking into account the improved overall performance on Google Cloud. Other questions covered financial services use cases for security, additional strategies for optimizing workloads for price performance, and wish-list items for Google Cloud financing options.

Marco ten Vaanholt kicked off the audience Q&A by asking what a Google Cloud customer community can do for the customers on the panel. Marshall said he’s interested in meeting talented developers, and Beal said he’s interested in meeting anyone who can give him ideas. As he put it, “Inspiration is always a very interesting value proposition.” After a couple more questions about estimating cost at peak performance and addressing customer pain points, Toubi asked each panelist to offer one piece of advice for someone considering using Google Cloud who isn’t already. Again, Paul saw a shot and took it. “If you’ve never been to Google before,” he said, “come for the credits, stay for the Kubernetes.”

Winding Down

Following the presentations, all in attendance broke away to connect during a networking reception. To read more about it, check out the exclusive onsite report linked below in the Extra Credit section, and to get involved in the customer-to-customer connections happening in person in the C2C community, follow the link to our live event in Cambridge, MA to register and attend. We look forward to seeing you there!

Extra Credit
Organizations with all kinds of storage and hosting needs are adopting cloud infrastructure as a preferred solution. For these organizations, this means lower costs, faster speeds, and enhanced performance in general. What does this mean for the teams managing this infrastructure? In many cases, it means adapting to new strategies and a new environment. Among the most popular of these strategies right now is containerization, and among the most popular of these environments is Kubernetes.

Mattias Gees is a solutions architect at Jetstack, a cloud-native services provider building enterprise platforms using Kubernetes and OpenShift. With Kubernetes and containerization gathering momentum as a topic of interest among our community members and contributors in recent months, we wanted to invite Gees to share what he has learned as a containerization specialist using Kubernetes on a daily basis. Gees and other representatives of Jetstack, a C2C Platinum partner, were excited to join us for a Deep Dive on the topic to share some of these strategies directly with the C2C community.

Gees started his presentation with some background on Jetstack, and then offered a detailed primer on Kubernetes and its capabilities for containerizing on the cloud. This introduction provided context for Gees to introduce and explain a series of containerization strategies, starting with load balancing on Google Kubernetes Engine. Another strategic solution Gees pointed out was one that has also been a frequent topic of discussion within our community: Continuous Delivery and Continuous Deployment (CD).

Kubernetes is a complex and dynamic environment, and different cloud engineers and architects will use it in different ways. To give a sampling of the different potential strategies Kubernetes makes available, Gees listed some advanced Kubernetes features, including health checks, storage of config and secrets in Kubernetes objects, basic autoscaling, and advanced placement of containers:

https://vimeo.com/645382667

The most impressive segment of Gees’ presentation was his overview of the Kubernetes platform, including a screenshot of his own cloud-native landscape. Gees concluded the presentation with a breakdown of the different team roles associated with Kubernetes modernization, stressing that implementing the many containerization strategies he identified is not the work of one person, but many working in concert toward common goals.

Are you an architect working with Kubernetes in a cloud-native environment? Do you prefer any of these containerization strategies? Can you think of any you’d like to add? Reach out to us in the community and let us know!

Extra Credit:
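One of the advanced features Gees listed, storing config and secrets in Kubernetes objects, is easy to sketch. The manifest below is a minimal, hypothetical illustration (names and values are placeholders, not from the presentation) of a ConfigMap and a Secret consumed by a container as environment variables:

# Illustrative only: configuration and a credential kept in Kubernetes
# objects rather than baked into the container image.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
stringData:
  API_KEY: "replace-me"        # placeholder value
---
apiVersion: v1
kind: Pod
metadata:
  name: config-demo
spec:
  containers:
  - name: app
    image: example/app:1.0     # placeholder image
    envFrom:
    - configMapRef:
        name: app-config
    - secretRef:
        name: app-secret

Keeping configuration out of the image is what lets the same container move unchanged across testing, staging, and production, one of the strategies the talk circles back to repeatedly.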
Cloud-first companies see cloud-native Kubernetes technology as the key to building modern application infrastructure with high scalability and internal developer process automation. While many companies have started their Kubernetes adoption, the program can often encounter challenges and complexities soon after its launch.

The recording from this Deep Dive includes:

- (2:00) Introduction to Jetstack
- (3:35) Agenda overview
- (4:00) Introduction to cloud native, Kubernetes, and microservices
- (9:45) Kubernetes and monolith application servers
- (14:00) Continuous delivery, GitOps, and advanced Kubernetes features
- (17:25) Picking the right workloads to migrate
- (19:55) Building application platforms to run on Kubernetes
- (23:30) Knowledge sharing and understanding the team needed to run Kubernetes projects
- (26:10) Final takeaways

This Deep Dive was presented by Jetstack, a foundational platinum partner of C2C and a Google Cloud Premier Partner which builds enterprise cloud-native platforms using Kubernetes and OpenShift. To connect with them, find @RichardC here in the community.
With companies migrating resources to the cloud and adopting cloud-native technologies at an accelerating rate, containerization is becoming a new term of art. For organizations and users just beginning or still planning their cloud journeys, the term can be obscure. Containerizing on the cloud is a developing practice, but it’s based on simpler concepts anyone can recognize.

Think of software containers as IT shipments that bundle each application and its runtime environment into a standalone, executable package. These containers move from environment to environment, across testing, staging, and production infrastructures. They’re lightweight, fast, scalable, and secure, preventing software flaws from slipping out and affecting the environment. With attributes like these, containers are ideal for hosting data and software on the cloud.

The Origins of Containerization

Early on, organizations ran applications on multiple physical servers, but these were costly and difficult to maintain, prompting developers to turn to virtualization. Virtualization allows applications and their components to run on the same physical server in isolated virtual machines (VMs). Containers are similar to VMs but, unlike VMs, are decoupled from the server’s underlying physical infrastructure and are, therefore, more lightweight, making them portable across clouds and OS distributions. Put another way, rather than each carrying a separate guest operating system on the host server like VMs, containers share the host’s OS kernel. This makes them cheaper and simpler to use, enabling developers to simply “carry” their software tools from one environment to another, instead of recreating these containers from scratch.

What is Containerization?

Containerization refers to packaging software code in a single “container” together with the components it depends on, such as:

- Runtime tools
- System tools
- System libraries
- Settings

Multiple containers can be employed across a single operating system (OS) and share that same kernel, which helps them run consistently in any environment and across any infrastructure, regardless of that infrastructure’s OS. For example, containers make it possible to transfer code from a desktop computer to a virtual machine (VM) or from a Linux to a Windows operating system, and to transfer that same code with its dependencies to public, private, or hybrid clouds.

Numerous benefits account for the popularity of containerization as a solution. Containers are:

- Far more agile, efficient, and portable than VMs.
- Perfect for continuous development, integration, and deployment with quick and efficient rollbacks.
- Cheaper than VMs.
- Extremely fast.
- Easy to manage.
- Consistent across OS environments; they run the same on a laptop as they do in the cloud.
- Extremely secure due to their isolated nature.

What Applications and Services Are Commonly Containerized?

Some computing paradigms especially suit containerization, including:

- Microservices, whereby developers can bundle single functions of their tasks into customized “packages”.
- Databases, whereby each app is given its own database, eliminating the need to connect all to a monolithic database.
- Web servers that need only a few lines of command on a container.
- Containers within VMs, which save hardware space, augment security, and talk to specific services in the VM.

Examples of Containers

Docker is one of the most popular container platforms due to its speed, execution, and holding power.
Some other popular container technologies include:

- CoreOS rkt
- Mesos Containerizer
- LXC Linux Containers
- OpenVZ
- CRI-O

Container Orchestration

As more containers are used, enterprises will need systems to run them. Kubernetes, the most prominent container orchestration platform, helps control, monitor, and maintain your containers at massive scale. Google Cloud offers its own containers to standardize software deployments across multiple machines and platforms. (This book provides a helpful guide). Google Kubernetes Engine helps IT teams manage troops of containers and automate software deployment.

Have you worked with containers before? Can you think of any points you’d like to add? Reach out and let us know!

Extra Credit:
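To make the orchestration idea concrete, here is a minimal, hypothetical Kubernetes manifest (the image and names are illustrative, not tied to any product mentioned above). It declares that three copies of a containerized web server should always be running; the orchestrator takes care of placing, monitoring, and replacing them:

# A minimal Deployment: Kubernetes keeps three replicas of this container
# running across the cluster's nodes and replaces any that fail.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25      # example image; any container image works here
        ports:
        - containerPort: 80

Notice that nothing in the manifest says which machines the containers run on; choosing and re-choosing that placement is precisely the orchestrator’s job.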
Kubernetes (K8s) is a container orchestration platform that makes sure each software container is where it’s supposed to be and that containers can work together. Containers are fashioned so that developers can migrate code and its components from one environment to the next in a portable, lightweight way with minimum overhead. Kubernetes, Greek for “helmsman” or “pilot”, is one of the most popular platforms for automating, deploying, scaling, controlling, monitoring, and maintaining these containers.

Problems with Containers

If you’re a developer, you probably often find yourself asking questions like:

- What happens if a container goes down?
- How do we keep the system running?
- How do we get containers to communicate?
- How do we observe what’s going on in containers?
- What’s the protocol for finding containers?
- How do we organize containers?
- How do we scale containers up or down?

Deploying containers is one thing. Managing them is slow, inefficient, and full of holes. That’s where Kubernetes comes in.

How Kubernetes Works

Kubernetes resolves these issues through a system of clusters, nodes, pods, and kubelets that intercommunicate to monitor and automate containers. It scales or descales these containers, aggregates or segregates them, and heals faulty containers, among other functions. It also restarts orphaned containers, shuts down containers when they’re not being used, and distributes containers in a logical and efficient way. At its core, Kubernetes gives you power over what gets done by managing, scaling, and monitoring your containers.

The following are the main components of the Kubernetes system:

- The control plane is the brain of the cluster, feeding it incoming and outgoing signals (for example, scheduling), as well as detecting and responding to cluster events (for example, starting up a new pod when a deployment's replicas field is unsatisfied).
- Nodes run the containers. They are the worker machines, with each node running an “agent” called a kubelet.
- Kubelets are tiny applications that run on the machine and communicate with the control plane (or Master Node), relaying its signals.
- Pods are the smallest units of Kubernetes. Each node has multiple pods.
- Etcd is the registry that stores data about the cluster.
- Objects define the desired state of the cluster. By creating an object, you're telling the Kubernetes system what you want your cluster's workload to look like.

Working with Kubernetes

Containers are stored as clusters of nodes. Each node has endpoints, DNS, pods, and kubelets, and is overseen by the master node, or control plane. You specify the desired state of your clusters with “objects”, achieved through a YAML configuration file listing the processes you want up and running. The control plane of Kubernetes then prompts the nodes and their minions (pods and kubelets) to carry out your bidding.

Nodes help containers spread by adding scalability to the cluster. They also provide for fault tolerance and a replica set for availability in case of downtime. Risk is distributed so that no running process can be taken down by a single point of failure. Kubernetes is self-healing, always returning the system to the ideal state specified by your objects in the deployment, by either curing or slaying flawed containers. For updates, Kubernetes pulls new container images from a container registry and rolls them out gradually for a smooth, stable transition.

The following are some of the benefits of working with Kubernetes:

- Load balancing, namely distributing traffic among the various containers.
- Tracking containers through their DNS names or IP addresses.
- Storage through local services, public cloud providers, and more.
- Automated rollouts and rollbacks. You can automate Kubernetes to create new or remove existing containers.
- Self-healing capability. Kubernetes restarts stalled containers, replaces others, and kills those that are hopelessly flawed.
- Deep security. Kubernetes secures sensitive information, such as passwords, OAuth tokens, and SSH keys.

In Short

As this handy Google Cloud cartoon strip illustrates, if you’re trying to manage hundreds or even thousands of containers, Google Cloud’s Kubernetes Engine monitors, controls, and automates them.

Extra Credit:
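One of the questions above, how do we scale containers up or down, has a concise answer in the object model: you declare a scaling policy and let the control plane enforce it. A minimal, hypothetical HorizontalPodAutoscaler (the Deployment name and thresholds are placeholders) might look like this:

# Hypothetical autoscaling object: scales a Deployment named "web"
# between 2 and 10 replicas based on average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70

This is the desired-state pattern in miniature: the object states an intent, and the control plane continuously adds or removes pods to satisfy it.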
Whether you’ve been an Application Engineer since the nascent days of cloud computing or you’re in the process of pivoting your career, the C2C Community is a highly collaborative and connective forum experience where those looking to enter the industry can connect with industry experts and veterans. For those looking to network with experts in the cloud community, C2C can help cloud engineers of any experience level grow their skills and advance their careers. One of the best ways to prepare for an upcoming interview with Google Cloud Platform technology specialists is to draft some relevant and specific interview questions that demonstrate your deep and unique understanding of your respective area of expertise.

Find out more about Kubernetes and Google Cloud Platform interview questions and directly network with those in the Google community. Your career and your mind will go far when you take part in the discussion on C2C.

Connect With Google Cloud Architects & Other Industry Insiders

The C2C Community is an online forum experience for serious Google Cloud industry insiders and enthusiasts. Explore engaging topics of interest, join various discussions, and gain invaluable perspectives on the competitive cloud engineering job market. In addition, learn all about Google Cloud Architect interview questions and the ins and outs of the interview process so you can feel prepared to take the next step in your career. Additionally, community members can explore specific areas of interest, discussing Kubernetes interview questions while getting exclusive insight from past and current Google employees. Ask questions in a forum-like setting and review responses from industry veterans. C2C is a Google Cloud community committed to enriching the lives, minds, and careers of tenured Cloud Engineers, Architects, and even those looking to get their foot in the door.

Kubernetes Interview Questions from the C2C Community

Whether you’re joining a well-established team, you’re being brought on to help improve an internal process, or you’re just prepping for an upcoming interview with Google, try integrating some of these Kubernetes and Google Cloud Platform interview questions from experts in the C2C Community to impress your interviewer.

General Kubernetes Interview Questions:

While you certainly want to reflect your more profound knowledge of cloud computing with your Google Cloud Platform interview questions, some general knowledge you want to have when stepping into an interview requires you to understand Kubernetes and container orchestration.

- What do you know about Kubernetes?
- What difference do you find between Docker Swarm and Kubernetes?
- What similarities do you find between Docker and Kubernetes?
- When would a cloud infrastructure use Docker over Kubernetes and vice versa?
- What difference do you find between deploying applications on the host and in containers?
- What are clusters in Kubernetes?
- What is Google Container Engine?

Google Cloud Architect Interview Questions:

Of course, an integral part of demonstrating your understanding of Kubernetes is to prepare yourself to answer some architecture-specific questions. Here are some great Google Cloud architecture interview questions:

- What is the role of the kube-apiserver and the kube-scheduler?
- Can you explain the Kubernetes controller manager?
- What is a load balancer in Kubernetes?
- What are the different types of services in Kubernetes?
- What do you understand about cloud controller managers?
Multiple Choice Kubernetes Interview Questions:

Multiple choice offers an excellent opportunity to put your expertise on display and test your creativity. Here are some excellent multiple-choice Kubernetes interview questions that will take any Google Cloud engineering conversation and interview to the next level:

- What are minions in the Kubernetes cluster?
- What was the latest Kubernetes version update, and what did it introduce?
- What are the responsibilities of a Replication Controller?
- Which of the following are core Kubernetes objects?
- Kubernetes cluster data is stored in which of the following?

Grow a Promising Career in Technology With C2C

The C2C Community is always facilitating interesting conversations around cloud computing, industry expertise, and the future of cloud innovation. We have a collaborative online community and forum experience that makes it easy to converse with other experts in the industry and tap into a well of knowledge that deepens with every new member, online discussion, and expert Q&A. Now it’s your turn to join the conversation. What are some other Google Cloud Platform interview questions you’ve heard in the past that you felt were exciting? Or perhaps you were on the receiving end of an interviewee's interesting cloud-related interview questions that made for an engaging, future-focused conversation about computing. Share your answers and start a conversation with other members of the C2C Community.
The Big Question—Can You Use Python in the Cloud?

Python is an excellent tool for application development. It offers a diverse field of use cases and capabilities, from machine learning to big data analysis. This versatility has allowed the creation of a real niche for Python cloud computing. And now, as DevOps becomes more and more cloud-based, Python is making its way into cloud computing as well. However, that’s not to say that running Python can’t come with its own set of challenges. For example, applications that perform even the simplest tasks need to run 24/7 for users to get the most out of their capabilities, but this can take up a lot of bandwidth—literally.

Python can run numerous local and web applications, and it’s become one of the most common languages for scripting automation to synchronize and manipulate data in the cloud. DevOps engineers, operations teams, and developers use Python as a preferred language, mainly for its many open-source libraries and add-ons. It’s also the second most common language used in GitHub repositories. Today we’re talking about running Python scripts on Google Cloud and deploying a basic Python application to Kubernetes.

How to Use Google Cloud for Programming

Businesses all over the world can benefit from cloud options. Both cloud-native and hybrid structures have technological benefits, like data warehouse modernization and levels of security compliance, that help fortify the development process and keep it running continuously. But running code on Google Cloud requires a proper setup and a migration strategy—specifically a Kubernetes migration strategy—if you intend to orchestrate containerization. Generally speaking, however, any code deployed in Google Cloud is run by a virtual machine (VM). Kubernetes, Docker, and even Anthos make application modernization possible for large applications. In the case of smaller scripts and deployments, a customizable VM instance is adequate for running a Python script on Google Cloud and determining processor size, the amount of RAM, and even the operating system of choice for running applications.

1. Check the Requirements for Running Python Script on Google Cloud

Before you can work with Python in Google Cloud, you need to set up your Python development environment. After that, you can code for the Python cloud environment using your local device, but you must install the Python interpreter and the SDK. The complete list of requirements includes:

- Install the latest version of Python.
- Use venv to isolate dependencies.
- Install your favorite Python editor. One popular Python Integrated Development Environment (IDE) is PyCharm.
- Install the Google Cloud SDK (gcloud CLI) for Python to access Google Cloud.
- Install any third-party libraries that you prefer.

2. Google Container Registry and Code Migration

To begin scheduling Python scripts on Google Cloud, teams must first migrate their code to the VM instance. For Python VM setup, many experts recommend using Google Container Registry for storing Docker images and the Dockerfile. First, you must enable the Google Container Registry. The Container Registry requires billing set up on your project, which can be confirmed on your dashboard. Since you already have the Cloud SDK installed, use the following gcloud command to enable the registry:

gcloud services enable containerregistry.googleapis.com

If you have images on third-party registries, Google provides step-by-step instructions with a sample script that will migrate them to the Registry.
You can do this for any Docker image that you store on third-party services, but you may want to create new projects in Python that will be stored in the cloud.

3. Creating a Python Container Image

After you create a Python script, you can create an image for it. A Dockerfile is a text file that contains the commands to build, configure, and run the application; Docker builds the image from it. The following example shows the content of a Dockerfile used to build an image:

# syntax=docker/dockerfile:1
FROM python:3.8-slim-buster
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
CMD ["python3", "-m", "flask", "run", "--host=0.0.0.0"]

After you write the Dockerfile, you can build the image. Use the following command to build it:

$ docker build --tag python-docker .

The --tag option tells Docker what to name the image. You can read more about creating and building Docker images here. After the image is created, you can move it to the cloud. You must have a project set up in your Google Cloud Platform dashboard and be authenticated before migrating the container. The following command builds and pushes the image to Google Cloud Platform:

gcloud builds submit

The above basic commands will migrate a sample Python image, but full instructions can be found in the Google Cloud Platform documentation.

4. Initiating the Docker Push to Create a Google Cloud Run Python Script

Once the Dockerfile has been uploaded to the Google Container Registry and the Python image has been created, it’s time to initiate the Docker push command to finish the deployment and prepare the storage files. A Google Cloud Run Python script requires creating two storage files before a developer can claim the Kubernetes cluster and deploy to it. The Google Cloud Run platform has an interface to deploy the script and run it in the cloud. Open the Cloud Run interface, click “Create Service” from the menu, and configure your service. Next, select the container pushed to the cloud platform and click “Create” when you finish the setup.

5. Deploying the Application to Kubernetes

The final step to schedule a Python script on Google Cloud is to create the service file and the deployment file. Kubernetes is commonly used to automate Docker images and deploy them to the cloud. Orchestration tools use a language called YAML to set up the configurations and instructions that will be used to deploy and run the application. Once the appropriate files have been created, it’s time to use kubectl to initiate the final stage of running Python on Google Cloud. Kubectl is a command-line tool for running commands against Kubernetes clusters, covering deployments, inspections, and log visibility. It’s an integral step to ensure the Google Cloud Run Python script runs efficiently in Kubernetes and the last leg of the migration process. To deploy a YAML file to Kubernetes, run the following command:

$ kubectl create -f example.yml

You can verify that your files deployed by running the following command:

$ kubectl get services

Extra Credit

- The Easiest Way to Run Python In Google Cloud (Illustrated)
- Running a Python Application on Kubernetes
- Google Cloud Run – Working with Python
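For reference, here is a hedged sketch of what the example.yml used in step 5 might contain for the Flask image built above. The image path, names, and replica count are placeholders for your own project:

# Hypothetical example.yml: a Deployment running the Flask image plus a
# Service that load-balances traffic to it. Replace the image path with
# the one produced by your own gcloud builds submit run.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-docker
spec:
  replicas: 2
  selector:
    matchLabels:
      app: python-docker
  template:
    metadata:
      labels:
        app: python-docker
    spec:
      containers:
      - name: python-docker
        image: gcr.io/YOUR_PROJECT_ID/python-docker:latest   # placeholder
        ports:
        - containerPort: 5000     # Flask's default port
---
apiVersion: v1
kind: Service
metadata:
  name: python-docker
spec:
  type: LoadBalancer
  selector:
    app: python-docker
  ports:
  - port: 80
    targetPort: 5000

Applying this file with kubectl create -f example.yml, as in step 5, should eventually show the Service with an external IP under kubectl get services.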
For an ever-growing number of companies, cloud migration is quickly becoming a question of when, not if. The potential benefits of cloud migration are undeniable, from long-term cost-cutting to better performance across key metrics. However, cloud migration is not a simple process. Many strategies are available for companies opting to migrate to the cloud, and each comes with its own ideal conditions, potential benefits, and risks. Cloud migration also involves different considerations depending on the prospective cloud environment. As a result, choosing to migrate to Kubernetes will entail unique implications for a company's migrating resources. Read on for more information about the different cloud migration strategies and the key points to consider when preparing to migrate to Kubernetes.

What is a Cloud Migration Strategy?

A cloud migration strategy is a plan employed by an organization or a team to migrate applications and enterprise data from the original hosting system to the cloud.

Cloud Migration Types

Cloud migration can be broken down into six different types: Rehost, Re-Platform, Repurchase, Retain, Refactor, and Retire. The cloud migration type you choose depends heavily on several factors. For one, it should reflect the type of data you need to migrate. Additionally, teams also need to consider the size of the organization, the workload level, and the current digital environment. Then, once that data and those applications have been successfully migrated onto the cloud, teams can create a Kubernetes migration strategy to manage cluster traffic effectively.

Rehost

Rehosting is one of the most specific cloud migration types, best suited for placing virtual machines and operating systems onto a cloud infrastructure without any changes. Rehosting refers to "lifting" from the hosting environment and "shifting" to a public cloud infrastructure. When rehosted, resources can be recreated as IaaS analogs so that software will run on these resources on the new platform as before. Rehosting is a common initial step for organizations starting a new migration. This migration strategy is low-resistance and well suited to tight constraints or deadlines, or any circumstance that requires completing the job quickly. It can be fast and efficient in these circumstances. However, migrated applications often require re-architecture once rehosted, and sometimes applications rehosted wholesale retain poor configuration.

Re-Platform

Re-platforming apps and data requires optimization of APIs and operating systems. For example, if teams are using multiple virtual machines to manage applications, a re-platforming migration strategy could allow them to switch to a platform with the ability to manage multiple workloads simultaneously, like Kubernetes. Re-platforming to Kubernetes involves breaking resources down into distinct functions and separating them into their own containers. Unique containers can be designed for each service. Containerizing these resources prepares them for optimal performance in the cloud environment, which can't be achieved as part of the rehosting process.

Repurchase

Repurchasing is a similar migration strategy to rehosting and re-platforming but more focused. Repurchasing involves optimizing individual components of applications that otherwise migrate as-is. For example, switching an application from self-hosted infrastructure to managed services or from commercial software to open-source is common.
Resources migrated this way should be tested, then deployed and monitored for crucial performance metrics.

Retain

After assessing the current state of your legacy systems and in-use applications, your team may determine that maintaining hybridity makes the most sense for modernizing application development and hosting. In this case, teams can employ a retention cloud migration model, or hybrid model, to optimize in-use applications that need improvement without disrupting the systems performing effectively.

Refactor

Refactoring rearchitects software during the migration process to optimize it for performance in the cloud environment once migrated. While re-platforming cuts down on the re-architecture that will need to occur after migration, refactoring integrates the re-architecture into the migration process as completely as possible. Refactoring requires significant resource investment on the front end. However, it yields a greater return on investment in the long run: the resources invested upfront end up cutting down on costs once the migration is complete and software is performing at optimal levels. Refactored applications can also be continually modified to adjust to new requirements.

Retire

When certain applications are not being put to valuable use, or if they have been made redundant by applications that provide the same services, these applications can be retired before cloud migration. This can be a step in the migration process: preparing for migration via one of the migration strategies described above may require retiring specific applications. Assessing available software before migration and determining which applications can and can't be retired is beneficial when possible and convenient.

Managing Applications With Kubernetes

Creating these multi-cloud and hybrid-cloud environments requires modernizing application management and adopting DevOps. Many organizations and teams with a cloud strategy manage their applications, workloads, deployments, and data with open-source tools like Docker and Kubernetes. And while choosing Docker vs. Kubernetes is entirely dependent on the preference of the DevOps team, Kubernetes offers a level of scalability and flexibility that makes it one of the most popular container orchestration tools on the market. However, that's not to say that issues managing cluster traffic and migration don't occasionally occur, in which case creating a Kubernetes migration strategy can help.

Creating a Kubernetes Migration Strategy to Avoid Downtime

Creating a Kubernetes migration strategy is similar to choosing a cloud migration strategy—the key to avoiding downtime when migrating applications is to act gradually and with awareness. However, moving applications within a cloud-native architecture is not as simple as rehosting applications. There are a few key considerations to make to craft an effective Kubernetes migration strategy.

Determine the Goal of the Migration

To determine cloud migration goals, identify specific business drivers and assess applications for migration based on priority. Cloud migration can yield all kinds of benefits, but some common goals of migration include increased speed and scalability of operations, better resources, lower costs, and improved customer service. In addition, for migrating to Kubernetes, it's essential to determine what should be modified: the application or the new environment. Thus, assess the application for possible modifications, how it would benefit from Kubernetes, and the effort involved.
Gather Information About Legacy Applications

When migrating applications, it's essential to take inventory of the filesystems and network compatibility of existing applications. Any system migrating to a new cloud environment will host legacy applications of different values. Some of these applications will be worth retaining for the historical significance of their information, while others will need to be retired. Many applications can be modernized to perform more dynamically on the cloud and bring unique benefits to the cloud environment. Migrating these applications to the cloud can increase their speed and scalability and improve their intelligence and analytics. Individual legacy applications will likely need to be modernized differently, so each should be assessed to determine which cloud migration strategy will suit it best.

Determine the Value of the Migration

It's possible that after assessing the goal of the Kubernetes migration strategy and the compatibility of in-use applications, you determine that migration is not worth the effort. Coming to this conclusion requires a deep understanding of legacy applications and unpacking the data at hand. In addition, any cloud migration will involve some costs, so calculating these costs to determine the potential value of the migration is crucial. When preparing for migration, determine the cost of the migrated resources and evaluate which expenses can be eliminated after migrating to the cloud. Before migrating, however, determine the potential value of legacy applications and what can be modernized or retired. An architecture like Kubernetes may not support some of these legacy applications, so knowing this beforehand will help minimize costs and maximize potential value down the line.

Kubernetes is a powerful tool for modernizing applications and adopting cloud-native architecture to help some processes run more smoothly. Still, it's first essential to determine the feasibility of migration and whether or not the outcome is worth the effort. Many organizations have updated their systems with platforms like Kubernetes, but we're interested to hear what you have to say! Join more discussions just like this with the C2C Community.
Most customers I talk to today are excited about the opportunities modernizing their workloads in the cloud affords them. In particular, they are very interested in how they can leverage Kubernetes to speed up application deployment while increasing security. Additionally, they are happy to turn over some cluster management responsibilities to Google Cloud’s SREs so they can focus on solving core business challenges. However, moving VM-based applications to containers can present its own unique set of challenges:

- Assessing which applications are best suited for migration
- Figuring out what is actually running inside a virtual machine
- Setting up ingress and egress for migrated applications
- Reconfiguring service discovery
- Adapting day 2 processes for patching and upgrading applications

While those challenges may seem daunting, Google Cloud has a tool that can help you easily solve them in a few clicks. Migrate for Anthos helps automate the process of moving your applications, whether they are Linux or Windows, from various virtual machine environments to containers. There is even a specialized capability to migrate WebSphere applications. Your source VMs can be running in GCP, AWS, Azure, or VMware. Once the workload has been containerized, it can then be easily deployed to Kubernetes running in either a GKE or an Anthos cluster on GCP, AWS, or VMware. Let’s walk through the migration process together and I will show you how Migrate for Anthos can help you easily and efficiently migrate virtual machines to containers.

The first step in any application migration journey is to determine which applications are suitable for migration. I always recommend picking a few low-risk applications with a high probability of success. This allows your team to build knowledge and process while simultaneously establishing credibility. Migrate for Anthos has an application assessment component that will inspect the applications running inside your VM and provide guidance on the likelihood of success. There are different tools for Windows and Linux, and for WebSphere applications we leverage tooling directly from IBM.

After you’ve chosen a good migration candidate, the next step is to perform the actual migration. Migrate for Anthos breaks this down into a couple of discrete steps. First, Migrate for Anthos will do a dry run where it inspects the virtual machine and determines what is actually running inside it. The artifact from this step is a migration plan in the form of a YAML file. Next, review the YAML file and adjust any settings you want to change. For instance, if you were migrating a database, you would want to update the YAML file with the point in the file system to mount the persistent volume that holds the database’s data. After you’ve reviewed and adjusted the migration YAML, you can perform the actual migration.

This process will create a few key artifacts. The first is a Docker container image. The second is the matching Dockerfile, along with a Kubernetes deployment YAML that includes definitions for all the relevant primitives (services, pods, stateful sets, etc.). The Docker image that is created is actually built using a multi-stage build leveraging two different images. The first is the Migrate for Anthos runtime; the second includes the workload extracted from the source VM. This is important to understand as you plan day 2 operations. The Dockerfile can be edited to update not only the underlying Migrate for Anthos runtime layer, but also the application components.
And while not mandatory, you can easily manage all that through a CI/CD pipeline. If you want to ease complexity and accelerate your cloud migration journey, I highly recommend you check out Migrate for Anthos. Watch the videos I linked up above, and then get your hands on the keyboard and try out our Qwiklab.
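The migration plan format itself is specific to Migrate for Anthos, but the database example above maps onto a standard Kubernetes pattern. As a generic, hypothetical illustration (this is plain Kubernetes YAML, not the Migrate for Anthos plan schema), mounting a persistent volume at the path where a database keeps its data looks like this:

# Generic Kubernetes illustration, not Migrate for Anthos output:
# a PersistentVolumeClaim mounted at the directory holding the
# database's data, so the data outlives the container.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  containers:
  - name: db
    image: example/db:1.0        # placeholder image
    volumeMounts:
    - name: data
      mountPath: /var/lib/db     # illustrative mount point
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: db-data

Whatever path you specify during the migration review step plays the role of mountPath here: it is the one directory whose contents must survive container restarts.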
We pored through our community and curated the best resources for GKE, and added a few our teams put together. Pro tip? Bookmark this list and keep it at the ready. This will be your launchpad for GKE projects.

From the Community

- First look at GKE Autopilot with Ahmet Alp Balkan, Google Cloud
- GKE troubleshooting: Bringing GKE Logs to the Cloud Console
- Autopilot in GKE
- Kelsey Hightower, Kubernetes, GKE Autopilot and Serverless Drive Conversation + video post
- Google Cloud announces the new GKE Autopilot...thoughts?
- A lot of resources available through our C2C Connect: France community
- C2C Navigators: Pali Bhat, VP of Product and Design for Developer Products at Google
- All of the GKE resources that are currently available on the C2C Community platform

From Google Cloud

- Check out how Cloud Operations offers GKE logs with a new logs tab.
- Kubernetes - a rookie way from Eugen Fedchenko, the Goland Team Lead at Zoolatech
- State of Managed Kubernetes 2020 from Yitaek Hwang, a Senior Software Engineer at Axoni
- Moving to GCP: Cloud run or Kubernetes? - u/Appelpitje on reddit

YouTube

- What is Google Kubernetes Engine (GKE)? from Google Cloud Tech
- GKE Autopilot - Fully Managed Kubernetes Service From Google from the DevOps Toolkit
- Google Kubernetes Engine Autopilot Getting Started from Michael Levan, Skylines Academy Author and Instructor

From the Experts

- CluePhant Technologies wrote this great post on Medium.com about different modes of operation in GKE. Check it out here.
- Tomas Papez, a Google Cloud expert, wrote this great post focusing on common pitfalls, and some not-so-great aspects of GKE Autopilot. Check it out here.
- Injecting Secrets in GKE with Secret Manager, a storage system for credentials or sensitive data in general, by Alessio Trivisonno, a DevOps data architect

From You

What resources do you use? What are we missing? Let us know! Comment below with your link and why it was helpful.
Led by community members Guillaume Blaquiere (@guillaume blaquiere) and Antoine Castex (@antoine.castex) with special guest Kelsey Hightower, principal developer advocate at Google Cloud, this C2C Talks event focused on GKE Autopilot. Introduced in February 2021, Autopilot is the latest major feature on Google Kubernetes Engine (GKE). It offers a new way to use, consume, and deploy a Kubernetes cluster where the nodes, master, and node pools are fully managed by Google.

Some key moments from this discussion include:

- The difference between Kubernetes and Google Kubernetes Engine (04:05)
- Defining Autopilot and Google’s vision for this new feature (6:35)
- How Autopilot stacks up against other container technologies (12:15)
- What makes a “cloud native” company (22:00)
- Getting started with Autopilot and pricing models (27:30)
- GKE Autopilot versus Borg (41:25)
- Web browsers and standardization at Google (49:40)
On February 24, Google Cloud introduced GKE Autopilot, a revolutionary mode of operations for managed Kubernetes that lets you focus on your software while GKE Autopilot manages the infrastructure. With the launch of GKE Autopilot, you can now choose from two different modes of operation in Google Cloud, each with a different level of control over your GKE clusters and the related responsibilities. Autopilot represents a significant leap forward by automatically applying industry best practices and eliminating all node management operations, maximizing your cluster efficiency and helping to provide a stronger security posture.

Related links:

- Autopilot overview | Kubernetes Engine Documentation
- Introducing GKE Autopilot | Google Cloud Blog
- Google Cloud puts its Kubernetes Engine on autopilot
- Google Makes Kubernetes Invisible In The Cloud With GKE Autopilot

Related Videos:

Tell us what you think in the comments:

- What are your thoughts?
- Have you tried it?
- Tips, tricks?
- What’s been the experience?
- Anything helpful we should know?
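For those trying it out, the practical difference shows up in how you size workloads: with node management out of your hands, you declare resource requests on your pods and GKE provisions capacity to match. A minimal, hypothetical pod spec (image and values are placeholders):

# Hypothetical pod for an Autopilot cluster: you state the CPU and memory
# the container needs, and GKE handles the nodes underneath.
apiVersion: v1
kind: Pod
metadata:
  name: autopilot-demo
spec:
  containers:
  - name: app
    image: example/app:1.0       # placeholder image
    resources:
      requests:
        cpu: "500m"
        memory: 512Mi

The design point is that the spec contains no node, machine-type, or pool configuration at all; those concerns move to the platform.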
Pali Bhat is in charge of Google’s application modernization and developer solutions portfolio, and in October he and Sean Chinksi, Chief Customer Officer, discussed Anthos, Google Kubernetes Engine (GKE), and various other hot-topic issues.

“As you think about your applications, you’ll see they’re the heart of your business and how you serve customers,” Bhat said. “They will become more germane and central to everything that your business does. And so, it's really important to have a platform that empowers all of your technology and application development teams to be proactive and to not have to worry about infrastructure, while still being secure and compliant and meeting the needs of your business.”

Watch the whole conversation below. Did you catch his answer to our favorite question at C2C, “Imagine Google’s Product and Design portfolio is a 10-episode Netflix series. What episode are we on?” Share it below!
Whether you’re an experienced coder or an app development novice, software packages like Kubernetes and Docker Swarm are two great tools that can help streamline virtualization methods and container deployment. As you search for an orchestration tool, you will come across two common platforms: Kubernetes and Docker Swarm. Docker dominates the containerization world, and Kubernetes has become the de facto standard for automating deployments, monitoring your container environment, scaling your environment, and deploying containers across nodes and clusters. When comparing Docker with Kubernetes, the main difference is that Docker is a containerization technology used to host applications. It can be used without Kubernetes, or with Docker Swarm as an alternative to Kubernetes. While both architectures are massively popular in the world of container orchestration, they have some notable differences that are important to understand before choosing one over the other. Today, we’re discussing Kubernetes vs. Docker Swarm’s different containerization capabilities to help teams and engineers choose the right architecture for their app development purposes.

What Is an App Container?

To fully understand the differences between Docker and Kubernetes, it’s essential to understand what an app container is. In software development, a container is a technology that hosts applications. Containers can be deployed on virtual machines, physical servers, or on a local machine. They use fewer resources than a virtual machine and interface directly with the operating system kernel rather than via a hypervisor as in a traditional virtual machine environment, making containers a more lightweight, faster solution for hosting applications. Application containers allow apps to run simultaneously without the need for multiple virtual machines in traditional environments, freeing up infrastructure storage space and improving memory efficiency. Many large tech companies have switched to a containerized environment because it’s faster and easier to deploy than virtual machines. Container technology runs on any operating system, and containers can be pooled together to improve performance.

What Are Kubernetes and Docker?

Kubernetes and Docker Swarm are two popular container orchestration platforms designed to improve app development efficiency and usability. Both Kubernetes and Docker Swarm bundle app dependencies like code, runtime, and system settings together into packages that ultimately allow apps to run more efficiently. Kubernetes is an open-source container deployment platform created by Google; the project first began in 2014. Docker Swarm was developed by Docker, the company behind the container technology it orchestrates, to improve app development’s scalability and flexibility. Still, both projects come with different architectural components and different app development capabilities that fuel the Kubernetes vs. Docker Swarm debate.

Kubernetes Architecture Components

A critical difference between Kubernetes and Docker Swarm exists in the infrastructures of the two platforms. Kubernetes architecture components, for instance, are modular; the platform places containers into groups and distributes load among them. This is different from Docker in that the Docker Swarm architecture utilizes clusters of virtual machines running Docker software for containerization deployment. Another main difference between the two platforms is that Kubernetes itself can run on a cluster.
Clusters are several nodes (e.g., virtual machines or servers) that work together to run an application. It’s an enterprise solution necessary for performance and monitoring across multiple containers.

Scalability

Another difference between Kubernetes and Docker Swarm is scalability. Should you decide to work with other container services, Kubernetes will work with any solution, allowing you to scale across different platforms. Considered an enterprise solution, it will run on clusters where you can add nodes as needed when additional resources are required.

Deployment

Docker Swarm is specific to Docker containers, deploying without any additional installation on nodes. With Kubernetes, however, a container runtime is necessary for it to work directly with Docker containers. Kubernetes uses container APIs with YAML to communicate with containers and configure them.

Load Balancing

Load balancing is built into Kubernetes. Kubernetes deploys pods, which comprise one or several containers. Containers are deployed across a cluster, and the Kubernetes Service performs load balancing on incoming traffic.

Docker Swarm Architecture Components

Docker Swarm architecture has a different approach to creating clusters for container orchestration. Unlike Kubernetes, which uses app containers to distribute the load, Docker Swarm consists of virtual machines hosting containers and distributing them.

Scalability

Docker Swarm is specific to Docker containers. It will scale well with Docker and deploy faster than Kubernetes, but you are limited to Docker technology. Consider this limitation when you choose Docker Swarm vs. Kubernetes.

Deployment

While the Docker Swarm architecture allows for much faster, ad-hoc deployments when compared to Kubernetes, Docker Swarm has more limited deployment configuration options, so these limitations should be researched to ensure that they will not affect your deployment strategies.

Load Balancing

The DNS element in Docker Swarm handles incoming requests and distributes traffic among containers. Developers can configure load balancing ports to determine the services that run on containers to control incoming traffic distribution.

Difference Between Docker and Kubernetes

To recap, while Kubernetes and Docker Swarm have many similar capabilities, they also differ significantly in their scalability, deployment capabilities, and load balancing. Kubernetes vs. Docker Swarm ultimately comes down to an individual developer or team’s need to scale or streamline aspects of their containerization deployment, whether those processes would be better suited to a platform capable of speedy deployments like Docker Swarm or flexibility and load balancing like Kubernetes.

When to Use Kubernetes

Google developed Kubernetes for deployments that require more flexibility in configurations using YAML. Because Kubernetes is so popular among developers, it’s also a good choice for people who need plenty of support with setup and configurations. Another good reason to choose Kubernetes is if you decide to run on Google Cloud Platform, because the technology is effortlessly configurable and works with Google technology. Kubernetes is an enterprise solution, so its flexibility comes with additional complexities, making it more challenging to deploy.
Difference Between Docker and Kubernetes
To recap, while Kubernetes and Docker Swarm have many similar capabilities, they also differ significantly in their scalability, deployment options, and load balancing. Choosing between Kubernetes and Docker Swarm ultimately comes down to what an individual developer or team needs from container orchestration: a platform capable of speedy, simple deployments like Docker Swarm, or the flexibility and built-in load balancing of Kubernetes.

When to Use Kubernetes
Google developed Kubernetes for deployments that require more flexibility in their configurations, expressed in YAML. Because Kubernetes is so popular among developers, it's also a good choice for teams that want plenty of community support with setup and configuration. It's an especially good fit if you decide to run on Google Cloud Platform, where it is straightforward to configure and works closely with the rest of Google's technology. Kubernetes is an enterprise solution, so its flexibility comes with additional complexity, making it more challenging to deploy. However, once you overcome the challenge of learning the environment, you have far more flexibility in how you execute your orchestration.

When to Use Docker Swarm
Because Docker Swarm was built directly for Docker containers, it's a good way for developers to learn containerized environments and orchestration automation. Docker Swarm is easier to deploy, so it can be more beneficial for smaller development environments, and for small development teams that prefer simplicity, it requires fewer resources and less overhead.

Extra Credit
https://searchitoperations.techtarget.com/definition/application-containerization-app-containerization
https://www.docker.com/resources/what-container
https://www.sumologic.com/glossary/docker-swarm/
https://thenewstack.io/kubernetes-vs-docker-swarm-whats-the-difference/
https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/
C2C Deep Dives invite members of the community to bring their questions directly to presenters. Google Cloud Run is quickly becoming one of Google Cloud's most engaging, albeit challenging, products. In this session, Wietse Venema (@wietsevenema), software engineer, trainer, and author of Building Serverless Applications with Google Cloud Run, provided a high-level understanding of what Google Cloud Run is, what the developer workflow looks like, and how to position it next to other compute products on Google Cloud. Explore the demo here.
Programming is in Cai GoGwilt's blood. So when he developed the technology behind Ironclad's AI-powered contracting solution, it felt like coming full circle. "I was fortunate to be exposed to technology very early," GoGwilt said. "My grandfather was a programmer before it was cool." From creating games on TI-83 graphing calculators to programming computers as a kid, GoGwilt knew technology had the power to change lives, whether by bringing joy or by creating efficient data processing. GoGwilt went on to study computer science and physics at MIT, where he also played cello in the university symphony orchestra. He then joined Palantir as a software engineer, working in depth with governments and large institutions. "I was particularly interested in the mission of bringing software to intelligence analysts," GoGwilt said. "And got interested in legal technology because it's an area where people could be helped a lot by adopting collaboration tooling." GoGwilt met Jason Boehmig, then a lawyer at Fenwick & West LLP, at a legal tech seminar. Together, they built Ironclad with the vision of modernizing contracting, which has long been difficult, time-consuming, and messy. Their solution? Digital contracting. "Contracts are hard because they're an inherently human thing," he said. "There's no good software for negotiating or collaborating on a contract." Also, as it turns out, lawyers are very similar to software engineers. "I think they think and approach problems very similarly," GoGwilt said. "For example, the way that lawyers design contracts [is] very similar to the way that engineers think through code. We're both constantly thinking about edge cases, about what could go wrong, and how we're going to deal with those things. We're thinking a lot about how to make something so elegant that it catches a lot of the wrong stuff that I can anticipate today and hopefully even some of the wrong stuff that I can't foresee." Ironclad has certainly created "something elegant" by changing contracting from a manual, disjointed black box into streamlined, integrated data pipelines. Ironclad began developing its AI solution, among other capabilities, on Google Kubernetes Engine back when it was still named Google Container Engine. As the team continued to build its stack on Google products, it branched into Google AI. It was a smart move at the right time, just as the pandemic sent everyone scrambling. "A lot of companies are reevaluating their agreements and trying to figure out where they have commitments and where opportunities for the business are," he said. "And being able to immediately auto-extract the terms of agreements is becoming critical." Identifying gaps in the business and speeding up decision-making are no longer nice-to-haves but must-haves. "Especially in the pandemic, having fast access to this kind of contract data has been critical to our customer base, including those in the healthcare industry who are on the frontlines of fighting the pandemic and those in the restaurant and transportation industries," GoGwilt said. GoGwilt is also mindful of the human element as both the problem and the solution. "AI has great applications in terms of being able to accelerate understanding and extraction of information," GoGwilt said. "But with that comes some risk of misunderstanding the information or lack of accuracy." So Ironclad pairs best-in-class AI with deep domain expertise about contracts, along with empathy for the end user, to address such challenges.
With their latest tool, Smart Import, "alpha users have been able to speed up contract upload by 50% and get three times as much contract data." So what's next? Simple. "We want to power the world's contracts," GoGwilt said. "That's our mission." Join C2C for a Navigator conversation on March 16 with GoGwilt to learn how Ironclad is using AI to power the world's contracts and improve efficiencies. Ironclad and GoGwilt will also be sharing the latest advances in contracting at their flagship summit, State of Digital Contracting, on March 25.
Going from concepts and ideas to solutions you can industrialize is the most significant challenge with AI on Google Cloud. It's not the prototypes and test ideas; it's getting the solution through the door, making sure it's reliable, and ensuring it doesn't put lives or livelihoods at risk or damage any equipment. Tackling this common but considerable challenge is Arnaud Hubaux, senior technical program manager at ASML, an equipment manufacturer and service provider headquartered in the Netherlands. A self-described "customer-obsessed" AI trailblazer, Hubaux blends physics with ML to create decision-making models that predict anomalies during microchip production. Satisfied only when trained models help customers reduce their spending and increase their yield, Hubaux sat down with the community for a C2C Talks* on AI and ML, sharing his journey and answering questions live.

What was the primary challenge?
There are a few. ASML is global, with 25,000 employees working in 118 countries, so there is a lot of management involved. And if that weren't complex enough, its products also require working at the nanometer level: precise, yet industrial and produced at scale with efficiency. For a little more context on the ASML world, the new A14 microchip, used in the new iPhone 12, is made by TSMC on ASML machines. ASML owns 85% of the market for this production equipment, with customers all over the world.

So, ASML makes the machines that create the chips and uses AI to optimize those machines' behavior. Each chip is composed of layers and pipes; think of Lego building blocks. Different machines create each layer, so they need accurate communication and synchronization for precise layering, and it all has to happen at scale. If any layer is amiss, the entire structure can collapse, resulting in a nonfunctioning chip. Additionally, Hubaux explained that ASML products are deployed in environments with no internet connectivity. There is no monitoring access on the system, meaning the AI must deploy and learn independently and adapt to any variation in its context. In a nutshell, they need to get their physics right, have their predictive economics precise, and ensure the ML model can learn efficiently and effectively. Hear the full explanation of the challenges below.

How did you use Google Cloud products as a solution?
Hubaux explained how his team packages its pipeline as a collection of Docker containers running on Kubernetes, allowing for easy deployment on a customer's premises with no internet connection. To get there, they start from their on-premise data store, which contains data samples from customer sites. The data goes into Google Cloud, where it lands in a storage buffer; each new arrival triggers a notification, which kicks off a data pipeline running on a Kubernetes cluster. The pipeline extracts the necessary data from the packages and loads it into BigQuery, where domain experts can start working in their AI notebooks. Hear how the stack comes together in more detail below:
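To make that flow concrete, here is a minimal sketch of the extract-and-load step using Google Cloud's Python clients; the bucket, dataset, and function names are hypothetical, and the actual pipeline Hubaux describes runs as Docker containers on a Kubernetes cluster rather than as a single script.

```python
# Minimal sketch of the buffer-to-BigQuery step, assuming hypothetical
# bucket and table names (pip install google-cloud-storage google-cloud-bigquery).
from google.cloud import bigquery, storage

def handle_new_package(bucket_name: str, blob_name: str) -> None:
    """Invoked when an upload notification fires for the storage buffer."""
    # Look at the data package that just landed in the buffer bucket.
    bucket = storage.Client().bucket(bucket_name)
    blob = bucket.get_blob(blob_name)  # fetches the object's metadata
    print(f"New package: gs://{bucket_name}/{blob_name} ({blob.size} bytes)")

    # Load the relevant measurements into BigQuery, where domain experts
    # can immediately query them from their AI notebooks.
    bq = bigquery.Client()
    load_job = bq.load_table_from_uri(
        f"gs://{bucket_name}/{blob_name}",
        "machine_data.metrology_samples",  # hypothetical dataset.table
        job_config=bigquery.LoadJobConfig(
            source_format=bigquery.SourceFormat.CSV,
            autodetect=True,
        ),
    )
    load_job.result()  # block until the load job completes
```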
And how's it going? Trusting your business to this technology and wanting to share your story is a powerful enough statement of how well it's going. Since so much of ASML's product development depends entirely on this flow, Hubaux explained why they went with Google Cloud and chose this technology. "The reason why we went for a Kubernetes cluster for the data pipeline and the data flow, or any Google-specific technology, is because we want to have a pipeline that is easy to transfer from one environment to the next, being on-prem or in Google Cloud, or even running at another cloud vendor, because that data pipeline has a lot of domain knowledge in it," he said.

Community Questions
What roadblocks did you face in building and training this, and how did you overcome them?
"The biggest roadblock for us is that we don't know at all what data or what quantity or quality of data we'll see when we are at customer sites. It forces us upfront to think about all kinds of weird combinations of kinds of effects that can occur and then test for them and make sure we are robust," Hubaux said. Hear his full answer below:

How do your yields compare to other traditional semiconductor processes?
"That's a difficult issue, because how semiconductor manufacturers apply those techniques is extremely IP sensitive, and there is hardly any information out there," Hubaux explained. "But what you see is that there is a general mistrust towards ML. If you cannot explain exactly how it works, for me, it's still a black box that people don't trust. This is why explainability is so important; it's not about which techniques we use." Hear his full answer below:

Extra Credit
The Google AI-focused blog released its research recap for 2020 and a look ahead to 2021. It's rich with detail, insights, and ideas. Need a crash course on ML? This training is geared toward developers and those looking to get started. Looking to accelerate transformation with analytics, AI, and ML? C2C spoke with Google's Andrew Moore in September, and it's a discussion worth revisiting. Curious about ASML, or want to learn more about Arnaud Hubaux and how he works with Google Cloud products? You can hear him on the GCP Podcast!

We want to hear from you! To learn with your peers in our C2C Connect groups, you can sign up here. We have a room for each of our primary topic areas; explore to find what's right for you.

*C2C Talks are an opportunity for C2C members to engage through shared experiences and lessons learned. Often there is a short presentation followed by an open discussion to determine best practices and key takeaways. If you'd like to participate in a C2C event or have a topic idea, please join our community and contact Sabina Bhasin, content manager, with your ideas.