The following article was written by C2C Global President Josh Berman (@josh.berman) as a member exclusive for TechCrunch. The original article is available here.

In many ways, 2022 was a year of growth for the cloud technology space. Unpredictable macroeconomic developments saw many organizations thinking about and preparing for greater wins in the years to come instead of right away. In 2023, much of this preparation could come to fruition as the growth achieved in 2022 contributes to a stronger economy and rapid advancements, particularly in tech. Global IT spending is projected to climb by 5.1% to $4.6 trillion in 2023, according to Gartner, driven by an 11.3% increase in investments in cloud applications to $879.62 billion.

What does this kind of increased spending and investment mean for organizations? C2C Global, a Google Cloud customer community, has identified five cloud trends to watch in 2023.

“Moving forward, custom solutions, rather than one-size-fits-all offerings from individual providers, will increasingly become the norm.”

AI and ML tech adoption will rise

Every organization wants to harness the many and varied capabilities of AI and ML technology. Some want to use their data to enhance analytics and build predictive models; others want to automate repeatable processes. Currently, many AI and ML models require extensive testing and training before they can be implemented at scale across large organizations hosting petabytes of data or serving wide customer bases. In fact, C2C’s research has found that only 47% of respondents are currently using AI and ML. However, these technologies ranked high among the ones that respondents hope to adopt in the future. The promise of these technologies is too significant to ignore. As models are refined and training and testing become more reliable and automatic, organizations will come to rely on these technologies more.
We’ll see more low-code/no-code app development platforms

Partly due to the rush to adopt AI and ML technologies that still require a lot of maintenance to perform reliably at scale, development teams are likely to implement low-code and no-code applications to reap the benefits of these technologies without the burden. For skilled developers, low-code and no-code options promise a lower barrier to entry for introducing and managing complex models. Significant savings in terms of time and cost, as always, will also be a massive draw.

More organizations will host resources in multicloud environments

Every cloud strategy requires delicate analysis to determine the proper balance of cost, efficiency, performance, scalability, and security. For a lot of organizations, sticking with a major cloud provider promises attractive savings that make a lot of practical sense. However, as cloud technology grows, individual products will be just as attractive to companies prioritizing scaling and transformation. Moving forward, even for companies using one cloud provider, adopting and implementing new resources from other providers may add value, and custom solutions, rather than one-size-fits-all offerings from individual providers, will increasingly become the norm.

Remote work tools will continue to improve

While remote work emerged during the pandemic as an emergency measure, the tools developed to accommodate it are now available as part of the expanded landscape of hybrid work technology. As AR and VR technology become more viable, organizations will continue to introduce and adopt new means of building a work environment that suits the needs of a diverse and changing workforce.

Cloud adoption will increase in formerly resistant sectors

Until recently, organizations in government and financial services resisted transformation due to the risk and burden of retiring entrenched legacy systems and migrating massive amounts of data.
Lately, though, the advantages of cloud adoption have been harder to ignore, and more organizations in these industries are adapting accordingly. For example, the U.S. Army recently said it would start using Google Workspace for its personnel operations. This expansion into previously underserved areas of the cloud market speaks volumes for cloud adoption.
When building assets like applications, databases, or AI/ML interfaces on Google Cloud, should you prioritize keeping costs down or building the best architecture you can? With the right strategies in place, you don’t have to choose between the two. Patrick Booher, SVP Cloud Solutions at Zazmic, Inc., and Anil Sharma, CEO and Founder of Trillo, are two executives in the Google Cloud customer community working on a daily basis to help their own customers find solutions that balance architectural planning with cost optimization. In this C2C Deep Dive, Anil and Patrick review a wide variety of practices supporting these solutions, as well as real-world stories of the customers putting them to successful use. Watch the full recording below, and use the following list to navigate to the topics most relevant to you:

(1:00) Introduction, Objectives, and Challenges
(5:30) Writing Requirement Specifications
(8:30) Discover Architectural Patterns
(11:45) Isolated Functions and Centralized Data
(21:55) Functions and Services
(34:50) Non-Functional Requirements
(38:00) Cost Optimization
(41:05) Utilization
(45:20) Cost Controls
(49:20) Thank You and Q&A
On Thursday, March 10, C2C DACH Community Manager Dimitris Petrakis (@Dimitris Petrakis) hosted a powerful event with Patrizia 'Pati' Jurek (DevRel Regional Lead DACH, WTM Europe Lead, Google) focusing on the different Google Developer Communities.

60 Minutes in 60 Seconds

(3:05) Who We Are
Jurek began her presentation by explaining what the DevRel (Developer Relations) team really is: an on-the-ground network of developers overseeing engineering programs and community managers who drive various global programs that follow the “1:few:many” model.

(4:20) What We Do
The main goal of DevRel is to nurture influencers and their communities everywhere to boost Google technology advocacy, adoption, quality, and perception.

(5:21) How do Google Developers support communities?
Google Developers support communities through learning, mentoring, and business building. The community is very diverse, with people coming from a huge variety of backgrounds, such as enterprises and startups. They partner with communities, Women in Tech leads, Google technology experts, startups, and more to provide them with the resources and guidance they need to be successful in building on Google.

(7:43) Video Presentation: "Google Developers: Community Connect 2021"
After her initial overview, Jurek shared a short video to give attendees a better understanding of what it means to be a part of this bigger community.

(11:46) Google Developers: Developer Ecosystem Team
The DevRel team spans 30 countries and connects with developers in over 140. Jurek presented analysis on these numbers, as well as the benefits gained by further developing communities worldwide and by engaging with top startups in strategic and up-and-coming markets.
(14:00) Community Programs
Jurek introduced the different Google Community programs––GDG (Google Developer Groups), GDSC (Google Developer Student Clubs), GDE (Google Developer Experts), and WTM (Women Techmakers)––and then explained in detail their statistics and numbers (countries, groups, annual events, developers reached, content reads, public speaking events and workshops, ambassadors, women in tech reached), as well as the events they host: where they are organised, when, by whom, and what they contain.

(22:08) Why does Google have Developer Groups?
Three words: Connect, Learn, Grow! Developer community is about meeting other local developers and those interested in developer technologies, learning about a wide range of technical topics and new skills, and applying new learnings and connections to build great products and advance your skills, career, and network.

(28:43) Google Developer Experts
Google Developer Experts are a global network of highly experienced technology influencers who actively support developers, companies, and communities. GDEs are independent volunteers who do not work for Google in any capacity.

(47:34) Google Developer Student Clubs
GDSCs are university-based community groups for students interested in Google's developer technology.

(52:25) Women Techmakers
WTM engages over 100,000 women in tech across 190 countries each year. WTM provides visibility, community, and resources for women in technology across all career levels to drive innovation and participation in the industry.

(56:44) Become a Google Cloud Developer Hero
Google Developer Heroes showcase and celebrate the innovation and career development of their teams, meet and exchange ideas with Google execs, cloud solution experts, and product teams, and join Google tech communities or become Experts to grow skills, mentor fellow developers, and partake in exclusive Google projects.

Watch the full recording of the event below:

Extra Credit
It’s tough to imagine a time when developers released their software without a preview at the end of their negotiated terms. Client dissatisfaction, miscommunications, and faulty code were only a few of the problems developers commonly faced. 2001 saw the publication of the Manifesto for Agile Software Development, which suggested that development teams split work into modules and commit each bit of code to a repository, where the entire team reviews the code for bugs or errors in a process called continuous integration (CI). In this model, once that code is vetted, it is passed on for continuous delivery (CD), where it is mock-tested, before being submitted for continuous deployment (also CD), to ensure flawless results. Since CI/CD is consistent and continual, some DevOps teams focus on CI, while others work on CD to move that code further down the pipe.

CI in short

Once a day, or as frequently as needed, developers commit––or “continuously integrate”––the code they have built into a shared repository, where the team checks it for errors, bugs, and the like. Tests include:

Unit testing, which screens the individual units of the source code;
Validation testing, which confirms the software matches its intended use; and
Format testing, which checks for syntax and other formatting errors.

TL;DR

After successful continuous integration, the code moves on to continuous delivery or release, followed by continuous deployment (CD). Both CI and CD are rinse-and-repeat situations, looping endlessly until DevOps is satisfied with the software. Check out a handful of popular CI/CD environments; which one do you use?

Jenkins, an open-source solution
CircleCI
GitLab
GitHub Actions
TeamCity, a proprietary solution

CI/CD

Whether organizations add CD onto CI depends on the size of the organization. Complex organizations like the Big 3 (Google, Amazon, and Facebook) use all three stages.
Smaller organizations, or those with fewer resources, mostly leave out either the continuous deployment or the continuous delivery stage, preferring the more fundamental continuous integration module. CI helps DevOps practice fault isolation, catch bugs early, mitigate risks, and automate successful software roll-out.

Why Use CI/CD?

CI/CD helps software developers move flawless products to market faster. Benefits include the following:

Manual tasks are easily automated.
Minor flaws are detected before they penetrate the main codebase.
CI/CD helps prevent technical debt, where overlooked flaws turn into expensive remediation down the line.
CI/CD processes can shorten cycle times.
The CI/CD process potentially makes software developers more productive, efficient, and motivated.

Considerations include the following:

CI/CD requires a talented DevOps engineer on your team, and these engineers are expensive.
CI/CD requires proper infrastructure, which is also expensive.
Few legacy systems support the CI/CD pipeline, so you may have to build your IT from scratch.
CI/CD also requires rapid response to alerts - like integration deadlines and issues raised by the unit team - for the system to work as it should.

Other options: If the hassle and expense seem overwhelming, there are alternatives. Consider the shortlist below:

AWS CodePipeline
Spinnaker
Google Cloud Build
Buddy
DeployBot

Bottom Line

According to a recent Gartner report, CI/CD has become a staple for top-performing companies during the last three years, with 51% of these organizations ready to check these agile SaaS developments within the next five years. CI, too, remains more popular than CD, especially for smaller organizations. For Google Cloud users like us, Cloud Build, with its particular CI/CD framework that works across various environments, is ideal.
It helps us improve our lead time (namely, decreasing the time it takes from building to releasing the code), boosts our deployment frequency, reduces Mean Time to Resolution (MTTR), and dramatically reduces the number of potential software defects. Cloud Build can deploy to VMs, serverless environments, Kubernetes, or Firebase, and helps us build, test, and deploy software quickly and at scale. So, how have you used Cloud Build or CI/CD? Did any of this resonate? I’d love to hear your story!
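The unit-testing stage described above is easy to picture in code. Below is a minimal sketch of the kind of check a CI server (Jenkins, Cloud Build, GitHub Actions, and so on) would run automatically on every commit; the `slugify` function and its test cases are hypothetical examples invented for illustration, not part of any article here.

```python
def slugify(title: str) -> str:
    """Turn an article title into a URL-friendly slug."""
    # Lowercase everything, replace non-alphanumerics with spaces,
    # then join the remaining words with hyphens.
    cleaned = "".join(c.lower() if c.isalnum() else " " for c in title)
    return "-".join(cleaned.split())

# Unit tests: a CI pipeline runs these on every commit and fails the
# build (blocking the merge) if any assertion does not hold.
assert slugify("Hello, World!") == "hello-world"
assert slugify("CI/CD  in 2023") == "ci-cd-in-2023"
print("all unit tests passed")
```

In a real pipeline these assertions would live in a test runner such as pytest, and the CI configuration would simply invoke it after each push.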
Application programming interfaces (APIs) simply connect apps. Think of them as a scalable application interface, with diverse code running on a server where all of that code needs to fit together perfectly for any particular application to work. Example: The Uber application comprises various interfaces: one for payment, another for calculating your current location, a third for handling the review you leave at the end of the trip, and so forth. Like a microservice, each API focuses on its own aspect and “communicates” with the other Uber APIs. When communication is fluid, and all the Uber APIs seamlessly interconnect, the Uber application works.

API Design

The first step in creating an API is to ask yourself: Why do you want to make that particular API - or why do you need it? What do you want it to accomplish? What’s your process of execution? API design is followed by API productization, which is the process of creating the actual API. An effective API is:

Easy to use
Easily understood or attractive in its simplicity
Clear and concise, with “just enough” data to do its job

API Management

The larger and more complex your application, the more APIs you’re likely to have (think of microservices). All of these APIs need to be secured, observed, updated, scaled (as the enterprise or application grows), cataloged, and retired when no longer necessary. Some core components of API management are:

API gateway, an API management tool that makes it easy for developers to design, develop, maintain, monitor, and secure APIs.
API dashboard, a space where you can observe the health and usage of your APIs to troubleshoot issues.
API catalog, a library of the relevant APIs for organizing, managing, and sharing these APIs with relevant developers and end users.

API Security

Hackers have a wonderful time with APIs.
After all, applications such as Facebook or Tinder, which reveal your deepest confidences, are commonly used and are open to vulnerabilities like excessive data exposure, incorrectly applied authentication mechanisms, and code injection. Most APIs are built on one of two protocols:

SOAP (Simple Object Access Protocol) - A highly structured message protocol that uses built-in protections, such as Web Services Security, to shield APIs. Organizations with highly sensitive data prefer SOAP.
REST (Representational State Transfer) - A more straightforward approach that uses HTTP/S and Transport Layer Security (TLS) encryption for security.

Common API Terms

Web API security - the safe transfer of data through APIs that are connected to the Internet
API keys - unique identifiers used to authenticate requests made to an API
GraphQL - a popular query language used for designing and developing effective APIs
JSON - a lightweight data format that applications use to “talk” to one another

Wrap-Up

APIs are simply applications that communicate with one another to answer your commands and give you access to required data. When it comes to Google, examples include Search, Gmail, Translate, or Google Maps. Google Cloud APIs help you create, deploy, and manage APIs on the Cloud through your favorite API language. Do you use APIs? Which are your favorites? Reach out and let us know!
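To make the idea of APIs "communicating" concrete, here is a minimal sketch in plain Python of how an endpoint might accept a JSON request, check an API key, and return a JSON response. The endpoint name, key, and fare formula are all invented for illustration; a real service would sit behind an HTTP framework and an API gateway rather than a single function.

```python
import json

# Hypothetical store of valid API keys (a real service would use a
# database or an API gateway to issue and manage these).
VALID_KEYS = {"demo-key-123"}

def handle_request(raw_request: str) -> str:
    """Accept a JSON-encoded request, authenticate it, and return JSON."""
    request = json.loads(raw_request)

    # API-key authentication: reject requests without a recognized key.
    if request.get("api_key") not in VALID_KEYS:
        return json.dumps({"status": 401, "error": "invalid API key"})

    # Route to a handler, the way a gateway dispatches to microservices.
    if request.get("endpoint") == "/fare":
        distance_km = request["params"]["distance_km"]
        # Toy fare formula: base fee plus a per-kilometer rate.
        return json.dumps({"status": 200,
                           "fare_usd": round(2.5 + 1.8 * distance_km, 2)})

    return json.dumps({"status": 404, "error": "unknown endpoint"})

# Example call, mimicking the payment/fare interface described above:
response = handle_request(json.dumps({
    "api_key": "demo-key-123",
    "endpoint": "/fare",
    "params": {"distance_km": 10},
}))
print(response)  # {"status": 200, "fare_usd": 20.5}
```

The same shape - authenticate, route, respond with JSON - underlies both REST services and the gateways that manage them.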
From chatbots to predictive text, all kinds of applications are using AI to navigate language barriers and facilitate communication across different communities. Many of these applications focus on text, but there is more to language than written words. Sometimes even fluent speakers of a second language will experience challenges when communicating face-to-face with native speakers. One of the best ways to overcome these challenges is to practice pronunciation.

Markus Koy (@MarkusK) is an IT projects analyst with 18 years of experience across various industries. He is also a native German speaker living in an English-speaking part of Canada, and a regular visitor to C2C’s AI and ML coffee chats, which are hosted in the U.S. Koy’s experiences working in English-speaking countries as a non-native English speaker inspired him to create thefluent.me, an AI-powered app that tests speech samples and scores them based on how well they correspond to standard English pronunciation.

On thefluent.me, users record themselves reading samples of English text (usually about 400 characters long), and then post them either publicly or privately on the app’s website. Within about 30 seconds, the app delivers results, reproducing the text and indicating which words were pronounced well and which can be pronounced better. Even native English speakers may find that they can improve their pronunciation, sometimes even more so than someone who speaks English as a second language.

We recently approached Koy with some questions about thefluent.me, Google Cloud products, and his experience with the C2C community. Here’s what we learned:

What inspired you to develop thefluent.me?

Koy began working on thefluent.me after contributing to a research project with an international language school.
As a second-language English speaker himself, he had already taken the International English Language Testing System; he had found pronunciation to be the hardest part of the process. “Immediate feedback after reading a text is usually only available from a teacher and in a classroom setting,” he says. Teachers only listen to a speaker’s pronunciation once, and will likely not provide feedback on every word. Tracking progress systematically is just not feasible in a classroom setting, and sometimes non-native speakers will feel intimidated when speaking English in front of other students.

Koy continued his research on AI speech-recognition programs and also graduated from Google’s TensorFlow in Practice and IBM’s Applied AI specialization programs. He decided to build thefluent.me to help students struggling to overcome these challenges.

What makes thefluent.me unique?

There are many apps on the market for students studying English as a second language, and thefluent.me is not the only app of this kind that uses AI for scoring. However, apps combine different features to support distinct learning needs. Koy kept these concerns in mind when designing and building the following features for thefluent.me:

Immediate pronunciation feedback: The application delivers AI-powered scoring for the entire recording and word-level scoring on an easy-to-understand scale.
Immediate feedback on reading speed: Besides pronunciation, the application provides feedback on the reading speed for each word.
Own content: Users can add posts they would like to practice instead of using content only published by platforms. They can immediately listen to the AI read their post before practicing.
Progress tracking and rewards: Users can track their activities and progress. They can revisit previous recordings and scores, check their average score, and earn badges.
Group learning experience: By default, user posts are not accessible to others.
However, users can also make their posts public and invite others to try, or they can compete for badges.

How do you use the Google Cloud Platform? Do you have a favorite Google Cloud product?

Koy runs thefluent.me on App Engine Flexible. He likes how easy the deployment process is, especially when managing traffic between different versions. Two key Application Programming Interfaces (APIs) Koy is using are Speech-to-Text and Text-to-Speech, which Koy says allow the WaveNet voices to sound more natural. He also likes that both allow him to choose different accents for the AI speech. Koy is also using Cloud SQL and Cloud Storage, which he finds easy to integrate.

What do you plan to do next?

“There are many other items for horizontal and vertical scaling on my roadmap,” Koy assures us. He is planning to add additional languages and enhance the app’s group features. He has also been approached by multiple companies who want to use thefluent.me for education and training. Koy plans to publish APIs to accommodate these requests in the coming weeks.

Why did you choose to join the C2C community?

Like so many of our members, Koy joined the C2C community to meet people and collaborate, but his experience here has informed his work on thefluent.me beyond friendly conversation. Recently, a community member expressed to Koy that thefluent.me is an ideal tool to use when preparing for a job interview—a user can rehearse answers to interview questions to learn to pronounce them better. For Koy, this is not just nice feedback; it is also a use case he can add to his roadmap.

Still, community itself is enough of a reason for Koy to return on a weekly basis. “Mondays are just not the same anymore without our AI and ML coffee chats,” he says.
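The article doesn't describe thefluent.me's scoring internals, but the idea of word-level feedback can be sketched by comparing the target text against what a speech-to-text engine heard. Everything below - the function, the 0-100 scale, and the sample transcript - is a hypothetical illustration of the concept, not Koy's implementation.

```python
def score_pronunciation(target: str, transcript: str) -> dict:
    """Give a per-word score by checking whether each target word was
    recognized in the transcript (a crude stand-in for acoustic scoring)."""
    heard = transcript.lower().split()
    scores = {}
    for word in target.lower().split():
        # 100 if the word was recognized verbatim, else 0 on this toy scale.
        scores[word] = 100 if word in heard else 0
    overall = sum(scores.values()) / len(scores)
    return {"overall": overall, "words": scores}

# Hypothetical example: the engine misheard "thorough" as "through",
# so that word gets flagged for practice while the others score well.
result = score_pronunciation("a thorough review", "a through review")
print(result["words"]["thorough"])  # 0
```

A production system would score acoustic similarity rather than exact word matches, but the feedback loop - read, transcribe, compare, flag - is the same.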
As the process of app modernization becomes a more widely accepted, streamlined way to update legacy and existing applications, it’s only natural that developers and app engineers alike discover an equally streamlined and concise way to describe an otherwise complicated process. You may have heard someone refer to the app development phases in four simple terms: define, design, develop, and deploy.

Each step in the application development process––from design to deployment––is critical to success. Both small and large apps need the right strategy to offer all necessary features to users and ensure development is cost-efficient for businesses. After deployment, applications must be maintained. By following the four stages of modern app development, teams can reduce maintenance overhead and keep applications running smoothly. Whether you’re not yet familiar with the four stages of modern app development or you’re a seasoned app designer, we invite you to join the conversation and share your thoughts about this tried-and-true process of app modernization.

What Is Application Design and Development?

But first, what is application design and development? And how do these two steps fit into the web app development process as a whole? Application design and development are two relatively straightforward concepts related to the process of building a computer program to perform certain business tasks, like budget forecasting and management, generating reports, and more.

Application design and development are arguably the most important components of a successful application. All phases of application development start with a plan. The planning stage will drive application development and design and ensure that all important features are included so that end users are satisfied with the results. While design and development are both integral parts of the overall web app development process, they are only two of the four stages needed to complete the total equation.
Define, Design, Develop, Deploy

While there are many variations of the web app development process that include anywhere from four to seven steps, we identify the following stages of modern app development: define, design, develop, and deploy. These four words make it possible for both the seasoned cloud engineer and the coding novice to speak the same language and share an understanding of an otherwise complex process.

Define

The “define” stage of the web app development process, sometimes referred to as “pre-design,” is the stage in which the app engineer and the client meet to nail down the objective of their project. In large enterprise businesses, a project manager might work through the define stage of the process with clients. Features, UI elements, business logic, and data that must be stored are defined during this phase, and all necessary application elements are communicated to developers.

Design

The design stage is one of the most multifaceted steps in modern app development. At a high level, this phase often involves determining user flow, creating detailed wireframes for developers to begin coding, and finalizing the UI appearance. A project manager will work with designers to flesh out necessary UI elements and approve the layout before it’s incorporated into the application. The user interface also includes user experience (UX) elements. The UI and UX work together to ensure a convenient flow for anyone who uses the application: the UI comprises the elements that make up the layout, and the UX is the ease of flow and the ability for users to engage intuitively with features.

Develop

Once the team has agreed on the initial design and strategy, the project can move into the most hands-on of the app development phases: development. In development, developers begin writing the code for the software product based on the final wireframes and design. This is also where teams can factor in the development methodology of their choice, like agile, waterfall, or RAD.
Agile is the most common methodology used in modern application development. The methodology used will determine the way development is carried out and how change requests are incorporated into the development flow. Every developer on the team is assigned a feature to code in the application. The developers estimate the effort necessary to code each feature, and the time is reported so that project managers and clients have an estimated time for completion.

Deploy

Many development teams will incorporate testing into one of the stages of app development to identify any bugs and ensure the code is ready to go live and handle the requests of real users. Once testing is completed as part of the last step of the development phase, it’s time to wrap up the web app development process and deploy the app on the appropriate server. The deployment phase of application development is the very last stage, and only happens after client approval. Deployments can be automated or manual. Continuous integration and continuous deployment (CI/CD) tools will automatically test and deploy applications, but most automation is created after a single manual deployment to identify and remediate any bugs.

Server equipment, including cloud infrastructure, must first be provisioned. Databases, servers, storage space, and networking equipment must all be in place before developers deploy the application. Once the application is ready to be deployed, a plan is created to determine the best way to push the application to production without interfering with user productivity. Downtime must be limited so that users can still work normally. This might require deployment to happen during off-peak business hours.

What Our Experts Say About Define, Design, Develop, Deploy

Our experts promote all four stages of modern application development, from the planning stage to the final deployment phase. We want to see your application become a success, so we help streamline your development procedures.
Our cloud connections help bridge the gap between your ideas and a community of people who can support your development process. Scaling the enterprise with new application ideas and provisioning cloud components to support them is no easy task. Our community of experts can answer questions and support your new ventures. The cloud is a vast environment with hundreds of options, and it’s difficult to identify the right ones for your business. We can make these decisions easier for you by helping you connect your application with the necessary cloud infrastructure.

Join our community of experts and find out how you can more easily navigate the waters of all four modern application development stages, so you can ensure a successful launch of your next software idea.
Don’t worry if you haven’t had the chance to join us yet for one of Fulfilld CTO and co-founder Michael Pytel’s Deep Dives. We record all of these sessions, including Pytel’s most recent Deep Dive, about Fulfilld’s automated monitoring and microservices. Pytel told us in his first Fulfilld Deep Dive that “every modern application today is built on a microservices architecture.” This session explores how Fulfilld uses automated monitoring to ensure that its microservices are reliable and deliver all the functionality they promise. In Pytel’s own words, “the most important feature of any system is reliability.”

Fulfilld monitors application performance, application errors, cloud connectivity, and customer connectivity, looking for “four golden signals”: latency, traffic, saturation, and errors. All metrics and insights collected via monitoring are logged and tracked to allow for profiling and debugging. All of this is accomplished on the Google Cloud Platform, in a multi-project environment comprising Cloud Functions, Cloud SQL, Cloud Run, and other Google Cloud products. The presentation also includes a slide detailing how Fulfilld uses FlutterFire for mobile monitoring.

Pytel will be joining us on October 12 for another Deep Dive, all about using natural language to build a digital assistant. However, his insights in the first sessions in this series are crucial to understanding how Fulfilld is disrupting the warehouse management space with a robust microservices architecture, an intuitive UI, and a next-generation digital assistant. Watch the most recent session below, and check out the links in the Extra Credit section. We recommend getting caught up before the next event.

Register here for Scaling an Enterprise Software: Digital Assistants and Natural Language

Extra Credit:

Microservices: The Journey So Far and Challenges Ahead
C2C Deep Dive Series: User Experience Design to Build with Fulfilld
C2C Deep Dive Series: Scaling an Enterprise Software with Fulfilld
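The "four golden signals" Pytel mentions are easy to reason about with a toy monitor. The sketch below, with made-up thresholds and request data, shows how alerts might be derived from logged request metrics for three of the signals (saturation would additionally require resource-utilization data); it is illustrative only, not Fulfilld's implementation.

```python
# Each request log entry: (latency in milliseconds, did it error?)
requests = [(120, False), (95, False), (480, False), (110, True), (100, False)]

# Hypothetical alerting thresholds for this sketch.
LATENCY_BUDGET_MS = 300    # worst-case latency we tolerate
ERROR_RATE_BUDGET = 0.05   # 5% error budget

def golden_signals(reqs):
    """Summarize latency, traffic, and errors from request logs."""
    latencies = [ms for ms, _ in reqs]
    errors = sum(1 for _, failed in reqs if failed)
    return {
        "traffic": len(reqs),             # requests observed in the window
        "max_latency_ms": max(latencies),
        "error_rate": errors / len(reqs),
    }

signals = golden_signals(requests)
alerts = []
if signals["max_latency_ms"] > LATENCY_BUDGET_MS:
    alerts.append("latency over budget")
if signals["error_rate"] > ERROR_RATE_BUDGET:
    alerts.append("error rate over budget")
print(alerts)  # ['latency over budget', 'error rate over budget']
```

In production, services like Cloud Monitoring compute these aggregates continuously and fire alerting policies instead of a `print`, but the logic is the same: collect, summarize, compare against budgets.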
The Big Question—Can You Use Python in the Cloud?

Python is an excellent tool for application development. It offers a diverse field of use cases and capabilities, from machine learning to big data analysis. This versatility has created a real niche for Python in cloud computing, and as DevOps becomes more and more cloud-based, Python is making its way into cloud workflows as well. However, that’s not to say that running Python can’t come with its own set of challenges. For example, applications that perform even the simplest tasks need to run 24/7 for users to get the most out of their capabilities, but this can take up a lot of bandwidth—literally.

Python can run numerous local and web applications, and it has become one of the most common languages for scripting automation to synchronize and manipulate data in the cloud. DevOps engineers, operations teams, and developers use Python as a preferred language, mainly for its many open-source libraries and add-ons. It’s also the second most common language used in GitHub repositories. Today we’re talking about running Python scripts on Google Cloud and deploying a basic Python application to Kubernetes.

How to Use Google Cloud for Programming

Businesses all over the world can benefit from cloud options. Both cloud-native and hybrid structures have technological benefits, like data warehouse modernization and levels of security compliance, that help fortify the development process. But running code on Google Cloud requires a proper setup and a migration strategy—specifically a Kubernetes migration strategy—if you intend to orchestrate containerization. Generally speaking, however, any code deployed in Google Cloud is run by a virtual machine (VM). Kubernetes, Docker, and even Anthos make application modernization possible for large applications.
In the case of smaller scripts and deployments, a customizable VM instance is adequate for running a Python script on Google Cloud and determining processor size, the amount of RAM, and even the operating system of choice for running applications.

1. Check the Requirements for Running Python Scripts on Google Cloud
Before you can work with Python in Google Cloud, you need to set up your Python development environment. After that, you can code for the Python cloud environment from your local device, but you must install the Python interpreter and the SDK. The complete list of requirements includes:
Install the latest version of Python.
Use venv to isolate dependencies.
Install your favorite Python editor. One popular Python Integrated Development Environment (IDE) is PyCharm.
Install the Google Cloud SDK (gcloud CLI) for Python to access Google Cloud.
Install any third-party libraries that you prefer.

2. Google Container Registry and Code Migration
To begin scheduling Python scripts on Google Cloud, teams must first migrate their code to the VM instance. For Python VM setup, many experts recommend using Google Container Registry for storing Docker images and the Dockerfile. First, you must enable Google Container Registry. The Container Registry requires billing to be set up on your project, which can be confirmed on your dashboard. Since you already have the Cloud SDK installed, use the following gcloud command to enable the registry:

gcloud services enable containerregistry.googleapis.com

If you have images stored with third-party services, Google provides step-by-step instructions with a sample script for migrating them to the Registry. You can do this for any Docker image that you store on third-party services, but you may want to create new Python projects that will be stored in the cloud.

3. Creating a Python Container Image
After you create a Python script, you can create an image for it.
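As a concrete illustration, the script being containerized could be a minimal Flask app (a hypothetical example; the Dockerfile in the next step runs Flask, so requirements.txt would contain the single line flask):

```python
# app.py - a minimal (hypothetical) Flask app to containerize.
from flask import Flask

app = Flask(__name__)

def greeting() -> str:
    # Pure helper so the response text can be tested without a server.
    return "Hello from Google Cloud!"

@app.route("/")
def index():
    # Served on 0.0.0.0:5000 by the "flask run" command in the Dockerfile.
    return greeting()
```

Any Python script works here; Flask is just the example the Dockerfile assumes.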
A Dockerfile is a text file that contains the commands to build, configure, and run the application. The following example shows the content of a Dockerfile used to build an image:

# syntax=docker/dockerfile:1
FROM python:3.8-slim-buster
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
CMD [ "python3", "-m", "flask", "run", "--host=0.0.0.0" ]

After you create the Dockerfile, you can build the image. Use the following command to build it:

$ docker build --tag python-docker .

The --tag option tells Docker what to name the image. You can read more about creating and building Docker images here. After the image is created, you can move it to the cloud. You must have a project set up in your Google Cloud Platform dashboard and be authenticated before migrating the container. The following command submits the image to Google Cloud Platform:

gcloud builds submit

These basic commands will migrate a sample Python image; full instructions can be found in the Google Cloud Platform documentation.

4. Initiating the Docker Push to Create a Google Cloud Run Python Script
Once the Dockerfile has been uploaded to Google Container Registry and the Python image has been created, it’s time to initiate the docker push command to finish the deployment and prepare the storage files. Running a Python script on Google Cloud Run requires creating two storage files before a developer can claim the Kubernetes cluster and deploy to it. The Google Cloud Run platform has an interface to deploy the script and run it in the cloud. Open the Cloud Run interface, click “Create Service” from the menu, and configure your service. Next, select the container pushed to the cloud platform and click “Create” when you finish the setup.

5. Deploying the Application to Kubernetes
The final step in scheduling a Python script on Google Cloud is to create the service file and the deployment file.
Kubernetes is commonly used to automate Docker images and deploy them to the cloud. Orchestration tools use a language called YAML to set up the configurations and instructions that will be used to deploy and run the application. Once the appropriate files have been created, it’s time to use kubectl to initiate the final stage of running Python on Google Cloud. Kubectl is a command-line tool for running commands against Kubernetes, such as deployments, inspections, and viewing logs. It’s an integral step to ensure the Python script runs efficiently in Kubernetes, and it’s the last leg of the migration process. To deploy a YAML file to Kubernetes, run the following command:

$ kubectl create -f example.yml

You can verify that your files deployed by running the following command:

$ kubectl get services

Extra Credit:
The Easiest Way to Run Python In Google Cloud (Illustrated)
Running a Python Application on Kubernetes
Google Cloud Run – Working with Python
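For reference, the example.yml deployed in step 5 could combine a Deployment and a Service. This is a minimal hypothetical sketch; the names and image path are placeholders to adjust for your own project and registry:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-docker
spec:
  replicas: 2
  selector:
    matchLabels:
      app: python-docker
  template:
    metadata:
      labels:
        app: python-docker
    spec:
      containers:
        - name: python-docker
          image: gcr.io/PROJECT_ID/python-docker  # placeholder image path
          ports:
            - containerPort: 5000                  # Flask's default port
---
apiVersion: v1
kind: Service
metadata:
  name: python-docker
spec:
  type: LoadBalancer
  selector:
    app: python-docker
  ports:
    - port: 80
      targetPort: 5000
```

The Service is what `kubectl get services` lists after the deploy succeeds.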
Facilitating and simplifying workflow management requires a range of solutions, especially in the warehouse space. Warehouse workers and managers need to track tasks, monitor activity within the warehouse environment, and locate and move packed items. Furthermore, they need to do it all constantly throughout the day. Fulfilld is an app that provides solutions for all of these problems and more. However, for the average warehouse manager or worker to use an app like Fulfilld to streamline and optimize their day-to-day work, the app needs more than just good technology. It also needs a design that makes the user interface (UI) accessible, intuitive, and straightforward. For the second event in our Deep Dive series featuring Michael Pytel, co-founder and CTO of Fulfilld, Pytel gave an entire presentation on Fulfilld’s UI design. In the first event in this series, Pytel explained how the app’s different functions address the needs associated with the different areas of the warehouse workflow. This time, Pytel focused on the user experience that makes these functions easy to use for warehouse workers opening the app for the first time. UI principles like interactive task management, workplace persona modeling, and human-centered design allow Fulfilld to bring its tech directly to the workers who need it. After a brief introduction to Fulfilld and its mission and capabilities, Pytel began with a bang, making a playful dig at the UI design of one of Fulfilld’s competitors. By doing so, Pytel demonstrated the limits of a user experience that relies on tribal knowledge. Fulfilld’s UI, by contrast, is designed to be intelligible to any first-time user. Commands are broken down by task and presented as an interactive list. The tasks are “organized by location and priority,” and then each is assigned to the appropriate team member. The list of tasks continually updates throughout the day as workflow changes.
Watch Pytel explain in detail below: Fulfilld’s UI is also built to suit different workplace personas, so that the app can provide a customized UI depending on the role of the warehouse employee using it. Fulfilld currently has four different personas mapped. Watch Pytel compare and contrast the “warehouse worker” and “warehouse manager” personas here: In the second half of the presentation, Pytel gave demos of several processes Fulfilld uses to build its UI. First, take a look at how they’re using Flowmapp for user story development and site mapping: Fulfilld also creates low-fidelity and high-fidelity mockups using Figma, which organizes screens into folders, provides code for every element of design, and creates a clickable, playable demo version of the site. Figma can also present every possible navigation link on the site simultaneously as a “spaghetti map”: In the last section of the presentation, Pytel outlined Fulfilld’s UI code-to-deploy sequence with a demo in the Google Cloud Platform (GCP) console, where Fulfilld uses different cloud-based applications to host and deploy code and link directly between GitHub, GCP, and Fulfilld.io: To wrap up, Pytel gave a quick recap of the entire UI process, starting with feature and function design and ending with code deployment, remarking that seeing an entire process from start to finish is “pretty cool for a 45-minute webinar.” What did you think? Are you a warehouse worker or an app developer working on your UI design? What else would you like to learn about the UI design of an app like Fulfilld? Let us know in the comments, and make sure to check out the rest of the events in our Deep Dive series on Fulfilld with Michael Pytel.
Michael Pytel (@mpytel), co-founder and CTO at Fulfilld, shares stories from the team’s wins and losses in building out this intelligent managed warehouse solution. The recording from this Deep Dive includes:
(3:10) Introduction to Fulfilld
(6:00) The team’s goals for user interaction, including recommended actions, building flexibility, and determining personas
(8:00) How to get to a modern, intuitive user interface
(11:00) Developing personas and some examples of Fulfilld’s personas
(14:20) Demo: road map and feature planning
(16:50) Human-centered design and Fulfilld’s design process
(18:40) Demo: user stories and site mapping
(22:10) Demo: low-fidelity and high-fidelity mockups
(26:10) From design to build
(29:05) Material UI
(31:05) From code to deploy
(35:30) Demo: Google Cloud Platform console
(40:40) Open community questions

Community Questions Answered:
Does your code model your process closely?
How did you use Flowmapp to build your business process engines?
How useful is the Figma heat map generated from user demos?
How many personas have you developed to successfully identify functionality use cases for your application?

Other Resources:
Google Cloud Platform Architecture Framework
Google Cloud Hands-On Labs on Coursera

Products used by Fulfilld:
Flowmapp (User Stories)
Aha! (Product Roadmap Software)
Figma (UX Design)
Material UI
User Interaction and Design Concepts

Find the rest of the series from Fulfilld below:
In essence, Google Cloud Pub/Sub is a real-time messaging system, perfect for large companies with modern microservices that want to integrate their services. GC Pub/Sub scales with platform growth and helps you distribute analytics and data to or from any number of stakeholders, any time, anywhere. Publishers post messages to stakeholders “fan-out,” meaning from one to many. Subscribers, namely the stakeholders who subscribe to a topic, receive messages “many to one,” that is, many subscribers drawing from one publisher. With GC Pub/Sub, you can reach huge numbers of people synchronously, or decouple your posts for targeted recipients. Projects get done faster, more reliably, and more effectively, with no downtime or jitter for the company. Today, more than 173 companies use Google Cloud Pub/Sub, including HENNGE K.K., Client Platform, and PLAID.

Why Do We Need GC Pub/Sub?
GC Pub/Sub is the only model that accommodates both push- and pull-based supply chain models. Push-based models use analytics to forecast customer demand and supply accordingly. Pull-based models, in contrast, wait for customer demand to flow in before supplying those needs. The two most notable push- or pull-based models on the market are Amazon Simple Notification Service (SNS), where publishers “push” or send SMS text messages to subscribers, and Amazon Simple Queue Service (SQS), where subscribers “pull” stored SMS text messages from publishers. GC Pub/Sub services both niches. Alternatively, vendors and operators can use synchronous (or asynchronous) remote procedure calls (RPCs) to communicate between their different microservices. But while GC Pub/Sub sends events to all that need to receive them, without regard to how or when these events will be processed, RPCs tend to be time-consuming, since publishers must wait for subscribers to receive the data.

Vocabulary
Topic – The named channel (a shared string) to which publishers send messages and through which subscribers receive them.
Publish – How publishers “push” (or “publish”) a message to a Cloud Pub/Sub topic. Subscription – How subscribers “subscribe” to a topic, either by pulling messages from the “subscription” or by setting up automated “webhooks” (i.e., push notifications) to receive them.

The GC Pub/Sub Process
The Publisher drafts a “message” that falls under a specific Topic. This “message” is either stored in Message Storage until it’s picked up by one of the subscribers who subscribe to that topic, or gets pushed to the entire pool, or to one or more pools, of appropriate subscribers. The recipients acknowledge receipt of the message. A non-success response means GC Pub/Sub redelivers the message until receipt is confirmed.

Example
Nike Inc. sells its products in more than 170 countries. Its microservices include checkout, activity tracking, and data reads. To make sure the apps deliver a good experience and to build Nike’s brand, Nike’s Albert Perez, director of database and data services, needs a scalable Pub/Sub system through which he can send messages to subscribers (namely, to his more than 30,000 employees across microservices and to Nike’s stakeholders). The ideal Pub/Sub system would help Nike’s microservices inter- and intra-communicate in a timely manner. The best type of messaging service should also be resistant to hacking, resilient to downtime, and fully automated and smooth. That’s what the Google Cloud Pub/Sub system provides!

The Pub-Sub Distribution System
Publishers have the following three options.
One-to-many (fan-out) – A single publisher posts to a topic with multiple subscriptions, so each message reaches many individual subscribers.
Many-to-one (fan-in) – Various publishers post to the same topic, and their messages are consumed by a single subscriber.
Many-to-many – Various publishers can post on the same topic to diverse subscribers.
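The fan-out option above can be sketched in plain Python. This is a toy in-process model of topic-based distribution, not the google-cloud-pubsub client:

```python
from collections import defaultdict

class ToyPubSub:
    """Toy in-process publish/subscribe bus illustrating topic fan-out."""

    def __init__(self):
        # topic name -> list of subscriber callbacks
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback to receive every message on the topic."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Fan-out: deliver the message to every subscriber on the topic."""
        for callback in self._subscribers[topic]:
            callback(message)

# Usage: two microservices subscribe to one topic (one-to-many).
bus = ToyPubSub()
received = []
bus.subscribe("orders", lambda m: received.append(("billing", m)))
bus.subscribe("orders", lambda m: received.append(("shipping", m)))
bus.publish("orders", "order-42")
# Both the billing and shipping subscribers receive "order-42".
```

The real service adds what this toy omits: durable message storage, acknowledgment with redelivery, and push/pull subscription modes.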
Example
Returning to Nike Inc., Albert Perez could use these different Pub-Sub options to send messages to one or more of Nike’s microservices at the same time. By the same token, one of its microservices––such as the data analytics service––could publish its own update or event ticket that would then get distributed to one or more targeted subscribers, who would acknowledge receipt of the data. Subscribers could also access these messages through the Pull process, where they select that particular topic from the Message Storage folder. Once retrieved, the post would be eliminated from Storage.

Google Cloud Pub/Sub Pros
Fully managed and highly scalable messaging middleware service – The system scales to meet demand. You can send millions of messages at one time with latencies of less than 100 milliseconds, which is considered decent.
Good support for asynchronous processes – GC Pub/Sub provides an ideal decoupling service, where microservices function separately from the rest of the company, enabling internal control, testing, or management of a single point of failure (for example) while the company continues to function.
A dedicated topic for unprocessed messages (dead-letter topic) – Delivery keeps retrying until the recipient acknowledges receipt of the message, and messages that can never be processed can be forwarded to a dedicated dead-letter topic.
HIPAA-compliant – GC Pub/Sub’s platform security ensures the privacy, security, and integrity of protected health information.

Google Cloud Pub/Sub Cons
Message ordering is not guaranteed.
GC Pub/Sub may stall on collecting (or sending) extra-bulky messages, which can stall the system.
Multiple transfer services can be complicated, with some messages becoming orphaned or mistakenly unacknowledged.

GC Pub/Sub Use Cases
Companies typically use GC Pub/Sub for the following scenarios:
Refreshing distributed caches – For example, an application can publish invalidation events to update the identifiers of objects that have changed.
Distributing event notifications – GC Pub/Sub allows you to gather events from many clients simultaneously.
Parallel processing and workflows – You can efficiently distribute a large batch of tasks among multiple workers––such as compressing text files, sending email notifications, evaluating AI models, or reformatting images.
Real-time event distribution – Events can be made available to multiple applications across your team and organization for real-time processing.
Replicating data among databases – In order, for instance, to inform all divisions of updates in real time or to distribute change events from databases.
Load balancing for reliability – If the service fails in any one area (for example, if one of your critical employees is missing), the others that have subscribed to this common topic can pick up the load automatically.
Data streaming from Internet of Things (IoT) devices – For example, a residential sensor can stream data to backend servers hosted in the cloud.

Bottom Line
For large companies like Nike Inc. that use microservices for agile, fast-moving, and lean communication, Google Cloud Pub/Sub helps send millions of messages in real time to any number of their microservices, with acknowledgment that these messages have been received. GC Pub/Sub is the only Push/Pull option on the market that’s also flexible and smooth. It’s scalable, secure, and reliable, and provides managers with a great option for effective storage, data analytics, and data streaming. Want to learn the basics of GC Pub/Sub and perhaps distribute your own messages? This free Google Cloud Pub/Sub Qwik Start lab lets you do all this and more…

Let’s Connect!
Leah Zitter, PhD, has a Master’s in Philosophy, Epistemology and Logic and a PhD in Research Psychology.
It’s a dream scenario: choosing your own cloud platform when designing, architecting, and building a global cloud enterprise software application. And that’s just where the Fulfilld story begins. Fresh off the launch circuit, the SaaS company is breaking the fourth wall and taking the C2C Google Cloud customer community behind the scenes and along for the ride with its development, engineering, and business leadership teams. They’ll candidly share their successes and challenges, and your engagement is welcome. This series will be a mix of articles, discussions, on-demand content, and even live events where you can bring your questions and comments directly to the teams. To kick off the journey, we begin with understanding who Fulfilld is, why they chose to build on Google Cloud, and how microservices are enabling them to quickly deploy features, develop an intelligent enterprise warehouse management platform, and support high-volume transactions that can scale globally.

First Things First - Why did you choose Google Cloud?
Michael Pytel, CTO, shared that he and his team are working to deliver an enterprise-grade application that enables a warehouse digital twin using 5G ultra-wideband-powered devices. From supporting high-volume transactions across the globe, to analytics, to machine learning and natural language processing that powers an industry-first warehouse digital assistant, Google Cloud Platform became their go-to platform when looking at functionality, pricing, scalability, performance, and innovation.

Listen and Join the Journey
In our first conversation, Pytel and C2C cover the following:
What key decisions contributed to choosing a cloud platform for the SaaS application
Fulfilld’s requirements for a globally available application
Why they need a combination of in-memory databases (Firebase) and a traditional SQL-based database (Cloud SQL)
Why they were so focused on leveraging the autoscaling features of Kubernetes for application logic

Rather skim?
Key questions are shared below, along with the full transcript, edited only for clarity.

Michael Pytel, CTO, Fulfilld (MP): Fantastic, Sabina. Thank you so much. I've worked in enterprise applications really all my adult life. I started as a night operator supporting an earpiece system called pix. Then I supported JD Edwards and PeopleSoft and then SAP enterprise ERP, and spent the last decade there. Now with Fulfilld, we're building a brand new company, and at this brand new company, we create the digital twin, which is really just a digital representation of the physical warehouse...so that you can visually look at how people move in my warehouse and how inventory moves in my warehouse. That's our secret sauce. That's our thesis as to why we're going to be successful, and we're getting a lot of good feedback from the market on our product today.

Sabina, C2C (C2C): So tell me a little bit about that feedback. What's resonating?

MP: This is where Google Cloud Platform comes into play. A lot of our customers love that it's very low total cost of ownership. You know, there's no server to deploy on premise. Everything is cellular connected and Wi-Fi connected, so we have that backup: if the customer's Wi-Fi network in the warehouse goes down, it falls over to 5G connectivity. So it's a really low cost of ownership, really easy and quick to deploy, and our application is auto-scaling, which I think is another benefit of Google Cloud Platform, meaning our customers don't need to worry about running out of resources, right? As they grow from a 50-person warehouse to a 100-person warehouse and then add the fourth, fifth, seventh, and tenth warehouse, the application auto-scales on Google Cloud Platform with the customer's growth. So they never really have to worry about “am I maxing out the server? Are we over-utilized? Is the system going to be slow when I add this new product line?” We don't need to think about this.
We can think about the business challenges we have. We don't need to think about server capacity, which I think is a big benefit of running on Google Cloud Platform.

C2C: Yeah, yeah. So talk to me a little bit about that decision, then, to go with Google Cloud. That is, did you build knowing that you would use Google Cloud? Or was that something that came up later? Talk to me about that decision.

MP: Yeah, so there are multiple, you know, infrastructure-as-a-service organizations out there: Google, Microsoft, Amazon, and other providers as well that are smaller but still very innovative. So we had to find that right mix of brick-and-mortar stability and investing in new technology and constantly innovating. When you take a look at Google and their ecosystem, the way that they share knowledge, the way that they share their product roadmap, the way that they create content on YouTube for developers to watch and articles for us to read, we just felt like, “wow, this is a fantastic organization that's continually innovating, continually pushing the envelope of what they can and can't do.” Also, enabling startups like us to run an enterprise-level, enterprise-grade application at the lowest cost possible [is another benefit]. So, it’s cost optimized, super innovative, great content, great partner program, easy for us to learn and ramp up resources. That was part of my personal scorecard when determining what platform to run on. Google Cloud Platform really just checked all those boxes for us.

C2C: What are you most excited about with Google Cloud?

MP: So personally, you know, their ability to run functions for your application in an auto-scaling manner. A lot of cloud providers can run Docker and Kubernetes, so Google has Google Kubernetes Engine, and we can run code in the backend on Google Kubernetes Engine, and it auto-scales. Microsoft can do that too. And Amazon can do that too.
But then there's this feature called Cloud Functions, which drives our cost of operating even lower. And they're really innovative. I can use TypeScript and Node in Cloud Functions. This is probably getting, you know, super technical...

C2C: ...Our community loves technical, go for it, give us the details!

MP: Fantastic. So, you know, looking at Cloud Functions, we just loved the way that they work. We loved how they're cost optimized, and when users are logging in and using the application, it can auto-scale and grow. I don't need to take on the management of Kubernetes and Kubernetes clusters and the management of how many nodes are active; I can just use this thing called Cloud Functions. Another thing within Google Cloud Platform that we really loved was Firebase. We use a mobile application in the warehouse, where you can picture yourself as a warehouse worker: a garage door is open, and there's a truck that you have to unload. A lot of times that truck has pallets and different products on it, and those products are going to different places. You don't typically unload a truck by yourself; you have a team member and a teammate, or a group of people, that are going to help unload this truck. So we needed a mobile application that was super responsive, super fast. [For example], if I receive 10 baseball gloves that are sitting right in front of me and somebody else grabs another 10 baseball gloves, we need to update each other, letting each other know that we both received 10, we need to let the backend system know we've got 20 total, and we need that to happen very fast. So we're using Firebase, the cloud-based NoSQL in-memory database, and we're using Google Flutter to build our mobile application, handling authentication there as well. It’s just a very responsive, very fast application because of these cloud technologies.
You know, we could have gone a traditional API route with a traditional SQL database, but Firebase has been super responsive for the mobile application, making it so that the warehouse worker can just keep working, keep working, keep working and not wait on the application to update the screen. So it's been fantastic so far.

C2C: How does that translate into the business outcomes for your customers or your clients?

MP: So within the supply chain world, there are tremendous pressures to get products to customers, right? We read about this all through COVID. There's more DTC, direct-to-consumer, shipping. Brawny paper towels, Georgia-Pacific: they were so used to shipping whole pallet loads to Sam's and Costcos. Now they have to ship individual piece products directly to consumers. This is happening across the industry, so there are just more things, more movements, more activities, more documents in the warehouse. Anything we can do to make sure that the user in the warehouse is supported and not encumbered, so that they don't view the system as a bottleneck but as an enabler, that's what makes us look great and makes the warehouse worker feel good about their job, meaning they know what they need to do and they can get it done very quickly without waiting on the system to process it. So us being very responsive and very quick enables that warehouse worker to do their job effectively throughout the day and enables that organization to do more. That's what we're trying to do: we're trying to make it so the individual warehouse worker can improve their throughput by 50% by navigating them through the warehouse very effectively using Google's AutoML and machine learning models for routing in the warehouse. Being super responsive, supporting that worker throughout the day, and just enabling organizations to do more. That’s our goal.

C2C: [Are you ready to compete with Amazon?]

MP: We get this question a lot.
Amazon's a massive company, and they run a lot of their own software, naturally, being one of the largest companies on the planet. In North America, there are 40,000 other customers and 40,000 other companies that make a product and need to ship their product to a customer. Our goal is to democratize the technology typically used by large companies and make it available to midsize companies. So being able to create that digital twin of the warehouse: yeah, Amazon already did that. But they have billions and billions of dollars. You know, what about the $200 million manufacturer of equipment in Durango, Colorado, or, you know, the upstart shoe manufacturer making shoes in the US and selling directly to consumers? They want a really sophisticated warehouse management application, something that's going to help them be super effective as they move product through the warehouse: reduce collisions, reduce the number of times we touch a product, optimize the picking route and the way that people are walking through there. What if they want that technology and they're not a billion-dollar company? What do they do? Well, that's where Fulfilld comes in, really trying to democratize that large-enterprise level of features and bring it down market to those midsize companies.

C2C: It's... it's really cool... and that's your why, right? That's your big mission, your core every day. Was this a COVID-born decision? Or where did this come from?

MP: We definitely founded the company in 2020. We started the company during the pandemic. We saw the need there. But there were also some other cool things in the mix here from a technology perspective.
There's a technology called ultra-wideband, which is not specific to Google or anybody else, but ultra-wideband is an indoor positioning technology that enables us to understand where an object is in a physical space. So there was a convergence: the need, meaning COVID and direct-to-consumer, was going to continue to grow, and every analyst agrees it's going to continue to get even bigger. So we knew that the logistics and supply chain space was going to have growing pains. We had this new technology that's being adopted more and more, and then we have Google Cloud Platform, which enabled us to stitch it all together. Now we're using machine learning in Google Cloud Platform to make recommendations to customers on where to store products in their warehouse. Because of this location technology, we understand where the product is, and because of the application we built, we understand what needs to be moved: what are the orders, what are the sales orders, what needs to be moved to the customer. So blending it all together there on Google, it's really cool.

C2C: [How did the application begin?]

MP: When we started the application, we started with a UX design and a beta test with a few customers. We created a website, we created the design of the application, and we communicated to the market what we were doing and what we were building. As we were just starting to build the application, we got pinged by Deloitte in Europe; they had found us, and one of their large retail customers had taken an interest. They said, “Wow, you guys are doing location tracking indoors to make employees more effective in the way that they move inside of the warehouse?” They thought, “could I use this in a retail scenario where I've got large retail?
I need to move people around at night, because I want to turn my retail stores into warehouses.” So we have to be able to run in multiple data centers around the globe, which, obviously, is something Google's very good at. We needed different features and functions within Google available in European data centers right away, which was fantastic. Google already had a partner that could help us understand privacy laws in different countries, and Google has a lot of information that showed us, or gave us leads on, how to handle different privacy and GDPR compliance within European data centers, which was fantastic. So that was number one: we knew we had to run in multiple data centers, and we needed to be multilingual. And again, just tacking on all the little components we needed: we needed an in-memory database, we needed a traditional database, we needed to be auto-scaling because we wanted to have, you know, really low operational costs, runtime costs. The next thing that we wanted to do was build the world's first natural language digital assistant. So the Siri or Alexa, the Google Assistant, of the warehouse, meaning I could hold up a device, my Google Pixel, and I could say, “Where's my next pick? Where's this material? What's the status of the next delivery? How many more tasks do I have?” A natural language digital assistant on devices in the warehouse, specific to my job function, and Google offered that as well: the ability to have a natural language digital assistant in multiple languages. So we were able to use their application to build a digital assistant that can speak Spanish and Swedish and English. As we continue to grow, we can continue to add more languages and be more global. So we definitely knew from the beginning we wanted to be a global company, and GCP has those features to help us do that.

C2C: That's, that's awesome. Thank you for sharing that whole story.
That's a nice, succinct way of explaining how it started and where you are now, to set everybody up. One of the other things that we offer the community is a solid set of criteria that can be used when determining the right platform for your application. Are there certain key markers or decision points that you can share for others who are evaluating whether or not they want to build on GCP? MP: Yeah, that's a good question. The big thing is the developer community and the developer community support, right? If you're transitioning a developer onto a platform, are they going to be able to ramp up quickly on the knowledge required? Are they going to be able to participate? Are they going to be able to have test environments and demo environments at a very low cost? So one thing was just developer community and developer community support; I think Google is very developer friendly and supports developers. The next one was service availability and data center availability: can I run in all the countries that I need to run in? And Google had that check mark. Then, in terms of innovation, as an organization they have a well-thought-out roadmap. They clearly communicate to the community what they're building and what they're sunsetting. We need to know, as we're building an application, if we're using a specific technology, what the roadmap for that technology looks like within your company. Is this something you're going to continue forward with? Or is this something you're going to kill and create something new? So as we are betting our futures on different technologies, whether that's Kubernetes or natural language processing or x, what does that product roadmap look like, so I can lay out my product roadmap? I think that Google is doing, and has done, a very good job of laying out what the roadmaps are in specific areas. There's always room for improvement; we always want more information, right? I'm never gonna be happy. 
But that's one thing you need to look at when choosing a cloud provider: what do the product roadmaps look like? How far out are they forecasting? And are they meeting the goals that they're setting, so that you can plan your product around that company's product roadmap? So product roadmap, developer support and adoption, and service and data center availability were kind of our top three. C2C: Yeah, thank you so much. Is there anything else that you wanted to ensure people understood about FullFilld or why you chose Google Cloud? Because my last question, if there isn't anything, which I'm sure there probably is, is why you are excited about the C2C community and how you see yourself contributing or being a part of it. MP: In terms of FullFilld, we want to share not only our product, what our product is, and what our vision is with the warehouse management and logistics community; we are also eager to share how we're building this platform with the development community. We're eager to share our experiences, talk about them out loud, get feedback, and ask good questions. One thing I've learned in my technical career is the more I share, the more I learn, so we at FullFilld definitely want to share everything that we're doing. We want to share how we're building it, where we're building it, our timelines, and the functionality that we're using. We're looking forward to engaging with the C2C community to have those open conversations, because we're going to learn something that we didn't know. Someone is going to ask, “Why did you do that?” and we'll need to defend it or adapt and move to something else. We don't want to build our application in a silo. There's a wonderful community of people out there, specifically within C2C, and we want to tap into that community, solicit feedback, solicit ideas, and hopefully find some people who want to work for FullFilld in the future. 
Also, I hope that we are sharing enough information so that other individuals who are out starting their own company, building a new platform, or building a new application within a larger organization can learn from our mistakes, hear about our challenges, and adapt and grow from there. That's really one answer for both questions. C2C: Yeah, that's amazing. Awesome. That's all I got. Do you have other things you wanted to add? MP: No, thank you so much for the opportunity and support. I'm excited to share, and I really hope that we get lots of great Q&A from the community. C2C: Yeah, me too. I'm really excited to share this out. Thank you so much for your time, Michael, and I'm sure we'll talk with you soon. MP: Yep, see you soon. The FullFilld Journey to Deployment continues with the following events:
This C2C Deep Dive was led by Nathen Harvey (@nathenharvey), a cloud developer advocate at Google who helps the community understand and apply DevOps and SRE practices in the cloud. The Google Cloud DORA team has undertaken a multi-year research program to improve your team's software delivery and operations performance. In this session, Nathen introduced the program and its research findings, and invited Google Cloud customer Aeris to demonstrate the tool in real time. Participate in the survey for the State of DevOps Report by July 2. The full recording from this session includes:
(1:40) Speaker introduction
(3:20) How technology drives value and innovation for customer experiences
(4:20) Using DORA for data-driven insights and competitive advantages
(7:15) Measuring software delivery and operations performance: deployment frequency, lead time for changes, change fail rate, and time to restore service
(14:20) Live demonstration of the DevOps Quick Check with Karthi Sadasivan of Aeris
(23:00) Assessing software delivery performance results: understanding benchmarks from DORA's research program; the scale of low, medium, high, or elite performance; predictive analysis by DORA to improve outcomes
(29:30) Using results to improve performance: capabilities across process, technical, measurement, and culture; the Quick Check's prioritized list of recommendations
(37:40) Transformational leadership for driving performance forward: psychological safety, a learning environment, and commitment to improvement
(41:45) Open Q&A
Other resources:
Take the DORA DevOps Quick Check
Results from the Aeris software delivery performance assessment
Google Cloud DevOps
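To make the four DORA measures from the session concrete, here is a small, self-contained sketch that computes them from a deployment log. This is only an illustration using made-up records and field names, not the DORA team's actual survey-based methodology:

```python
from datetime import datetime, timedelta

# Hypothetical deployment log: commit time, deploy time, whether the
# change failed in production, and (if so) when service was restored.
deployments = [
    {"commit": datetime(2021, 6, 1, 9), "deploy": datetime(2021, 6, 1, 11),
     "failed": False, "restored": None},
    {"commit": datetime(2021, 6, 2, 14), "deploy": datetime(2021, 6, 3, 10),
     "failed": True, "restored": datetime(2021, 6, 3, 12)},
    {"commit": datetime(2021, 6, 4, 8), "deploy": datetime(2021, 6, 4, 9),
     "failed": False, "restored": None},
]

days_observed = 7

# 1. Deployment frequency: deployments per day over the window.
deploy_frequency = len(deployments) / days_observed

# 2. Lead time for changes: average commit-to-deploy duration.
lead_times = [d["deploy"] - d["commit"] for d in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# 3. Change fail rate: share of deployments that failed in production.
failures = [d for d in deployments if d["failed"]]
change_fail_rate = len(failures) / len(deployments)

# 4. Time to restore service: average deploy-to-restore for failures.
restore_times = [d["restored"] - d["deploy"] for d in failures]
avg_restore = sum(restore_times, timedelta()) / len(restore_times)

print(f"Deployment frequency: {deploy_frequency:.2f}/day")
print(f"Avg lead time for changes: {avg_lead_time}")
print(f"Change fail rate: {change_fail_rate:.0%}")
print(f"Avg time to restore: {avg_restore}")
```

In practice these numbers come from your CI/CD and incident tooling; the DevOps Quick Check estimates them from a short questionnaire instead.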
While cloud computing has come a very long way since the nascent days of Google App Engine, we're still only beginning to understand the areas in which cloud technology can make the greatest impact on our lives. One industry that recently took steps toward a more technology-based model is healthcare and wellness, with the adoption of data repositories like the Google Cloud Healthcare API for storing medical records and telehealth communication to make doctor's visits and therapy appointments more accessible. Cloud computing has given providers options when it comes to communication, but the growing popularity of wellness and cognitive behavioral therapy (CBT) apps could soon lead to better data visualization and even personalized treatment when it comes to mental health. What Is an Evidence-Based Mental Health App? Even though it seems like there are apps for anxiety and depression of all shapes and sizes on the market today, the number of evidence-based mental health apps is still relatively small. That's because, in order for an app to be considered evidence-based, it needs to meet certain requirements set by the U.S. Food and Drug Administration or be supported by at least one randomized clinical research study demonstrating its effectiveness, as reported by PsychCentral. Evidence-based mental health apps are a significant improvement because they give the patient and provider confidence in diagnosis and treatment. They also save time otherwise spent on unnecessary diagnostic testing and consultations with specialists. These apps range in complexity from simple reference retrieval to transaction processing, complex data mining, and rule-driven decision support, giving all users a trusted, scientifically grounded system to back up their mental health data. 
Useful Features of CBT Apps Many evidence-based mental health apps come with a variety of features that individuals can use to manage or track some area of mental health. From setting reminders for taking medication to deprogramming negative thought tendencies, mental health and CBT apps offer features that can help users build awareness around their mental health. CBT apps can improve our daily habits and willpower and give us a growth system that improves our way of thinking. Not only do they provide a space for tracking; they can also hold journaling and notes on the impacts of daily habits, creating a single place where the user has the power to break bad habits and start healthier new ones. Self-Management and Tracking One evidence-based mental health app, Medisafe, allows users to set alerts to remind them when to take medication, while other CBT apps, such as Worry Knot by IntelliCare, use cognitive-behavioral principles like “tangled thinking” to teach users how to manage everyday worries and anxiety. Tracking and self-management can help users understand more about thought patterns and side effects from medication, all of which can help patients and doctors find the treatment path and clinical plan that works best for them and build confidence along the way. Apps for anxiety and depression also offer mindfulness techniques that help with meditation, quick mental-reset programs, and even SOS buttons for when the user urgently needs support. Data and Analytics Users who want to manage their mental health through better sleep and routine exercise can use an app like Whoop to track their respiratory rate and sleep quality. 
Whoop is not a CBT or mental health app, but its ability to track patterns in sleep and recovery can help users zero in on the behavioral patterns that may be negatively impacting their health and, by extension, their mental health. Personalized Recommendations Other mental health apps, such as Breathe2Relax, equip users with recommendations for breathing exercises to soothe symptoms of PTSD, general anxiety, and more. Calm offers therapeutic music and can track sleep and help build healthier sleep patterns. And we can't forget MoodKit, which allows users to create customized journal entries for moods. As mental health apps expand and become more specific, these personalized touches bring a more manageable, enjoyable, and convenient way to manage our health. Current Limitations of CBT Apps While apps for anxiety and depression have grown in popularity, many are still created with little evidentiary support that they work. The ability to track and manage health with CBT apps can make it seem like users have the world in their hands, but it's still important to commit to the work. We all need accountability, and the more personal relationships we build with a friend, family member, or provider create a safe space that pushes us forward. So, we still have to show up and put forth the effort. Because they rely on self-direction, CBT apps may also not be suitable for people with complex mental health needs or learning difficulties. Some critics argue that these apps address only current, narrowly defined problems and give little attention to possible underlying causes of mental health struggles, such as an unhappy home. Engaging with a CBT app can build pressure to face fears, but it takes true honesty to involve oneself in things that can eventually change our lives for the better. 
How Evidence-Based Mental Health Apps May Improve Treatment and Patient Plans While we may still be in the beginning stages of understanding just how cognitive behavioral therapy apps meaningfully fit into the treatment of mental health, there are some early indications that a hybrid treatment plan could provide more mental health services to rural areas and help bridge the mental health treatment gap. Treatment plans are a good place to start when working to improve one's mental health. A mental health treatment plan creates teamwork between patient and provider, which can greatly enhance client engagement. Storing data in evidence-based resources makes patient treatment plans more trustworthy and better informed. Goals, milestones, and timelines make progress easier to record, and providers can see where a patient is, or may be, headed. Credible knowledge from worldwide sources is brought together in a digital space that is accessible and highly specific to the data captured by CBT apps. Evidence-based mental health apps will be a part of our evolving health systems, lowering healthcare costs, providing easy access to an updatable network of medical data, and creating a more customized experience and roadmap of care. Extra Credit:
https://psychcentral.com/blog/top-7-evidence-based-mental-health-apps#1
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5897664/
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7381081/
https://riahealth.com/2019/10/15/mental-health-apps/
http://www.thecbtclinic.com/pros-cons-of-cbt-therapy
Look back on Earth Week 2021 with the C2C Community. This panel discussion was hosted by C2C community members to share their companies' initiatives toward sustainability. L'Oreal kicked off the panel with their tips and tricks on a green cloud. Shared by:
Herve Dumas, Group Chief Technology Officer
Antoine Castex, GCP Architect Lead
22d Consulting told the story of how sustainability was built into the core of their business DNA. Shared by:
Dominik Kugelmann, Chief of Vision & Co-founder
Marie Touchon, Customer Success Manager
ThoughtWorks delivered a short presentation on how development teams can reduce cloud carbon emissions. Shared by:
Dan Lewis-Toakley, Green Cloud Lead & Senior Developer Consultant
Danielle Erickson, Senior Consultant Developer
Links shared by the community:
Cloud Carbon Footprint and its Cloud Carbon Footprint Repository
Digital Sobriety: A responsible corporate approach
Google Cloud Region Picker
Lean ICT: Towards Digital Sobriety
Why green cloud optimization is profitable for you and the planet (Thoughtworks)
While people worldwide celebrate Earth Week, it has become essential to take a step back and look at where we are with our responsibility toward the environment and how we can celebrate our planet while making technology more reliable and accessible to everyone. Google has demonstrated a longstanding commitment to climate action and environmental sensitivity, and this fidelity is evident in the company's goal to run entirely on carbon-free energy by 2030. However, what does this commitment mean for us as Google Cloud users? Where does our responsibility lie in this sustainability mission? Google Cloud's Sustainability Commitment In 2018, Google achieved 12 consecutive years of carbon neutrality and, for the second year in a row, matched 100% of the electricity consumption of its global operations with renewable energy. It then announced the goal of powering its operations with carbon-free energy 24/7, 365 days a year. Currently, Google Cloud data centers use a blend of renewable and nonrenewable energy. Carbon emissions, such as the release of carbon dioxide (CO2) and methane (CH4) into the atmosphere, are part and parcel of performing any kind of activity at a data center. However, Google Cloud is helping customers like us understand this information in a consumable format, so we can check the environmental footprint of the data centers in which we host our applications. What does this information mean for us, and how can we help make our applications friendlier to Earth while maintaining optimum performance? Green Data Centers Google data centers are twice as energy-efficient as typical data centers, and Google shares performance data with its users to help businesses get greener. 
Using the Google data centers efficiency tool, teams can measure and improve energy use and assess performance reports by year and quarter. Apart from this, Google Cloud uses a healthier, greener supply chain to encourage recycling and reuse and to make more thoughtful use of our planet's resources. However, since sustainability is a joint effort between the cloud provider and the user, it is essential to understand what resources we have at our disposal to make choices that work best for our applications and the planet. Google Cloud uses the term “green data center” to indicate a sustainable data center: a service facility that utilizes energy-efficient technologies. Knowing the list of green, or energy-efficient, data centers can help us make the right switch for an environmentally friendly experience. Google Cloud has released carbon-friendly energy scores for Google Cloud regions. This information can help us choose locations that are energy efficient and have lower carbon emissions. Making the Right Choice for Your Applications The region picker tool can help us assess scores, and we can slide the bars in front of different metrics to shortlist the regions best optimized for our use. The snapshot below demonstrates an example of the energy score for Oregon, USA (us-west1). Icons to the right of the region name denote how energy efficient the region is and how expensive it is for our applications. Apart from these icons, three more metrics are displayed for our consideration. Here's a breakdown of what these terms stand for: Carbon-Free Energy A Google Cloud region's carbon-free energy percentage describes how much of its energy comes from renewable sources and how much is nonrenewable. In our example, 89% of the power comes from renewable sources, making it an excellent, sustainable option for running our applications. 
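As a rough illustration of how a carbon-free energy score can feed into region selection, here is a toy sketch that filters and ranks candidate regions by that percentage. Apart from the 89% figure for us-west1 taken from the example above, the region names and numbers are illustrative placeholders, not Google's actual published scores:

```python
# Hypothetical region scores, modeled on the region picker's
# carbon-free energy (CFE) metric. Values are illustrative.
regions = [
    {"name": "us-west1 (Oregon)", "cfe_percent": 89},
    {"name": "region-a", "cfe_percent": 93},
    {"name": "region-b", "cfe_percent": 40},
]

def greenest_regions(candidates, min_cfe=75):
    """Keep regions meeting a carbon-free energy threshold, greenest first."""
    eligible = [r for r in candidates if r["cfe_percent"] >= min_cfe]
    return sorted(eligible, key=lambda r: r["cfe_percent"], reverse=True)

for region in greenest_regions(regions):
    print(f'{region["name"]}: {region["cfe_percent"]}% carbon-free energy')
```

A real decision would weigh this score against the other region picker metrics, such as price and latency to your users.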
Grid Carbon Intensity A Google Cloud region's grid carbon intensity refers to the number of grams of carbon dioxide (CO2) emitted to produce one kilowatt-hour (kWh) of electricity. Simply put, a region with lower carbon intensity is a more sustainable choice for running our applications. Google has released this information because we may encounter situations where more than one region receives an equal portion of its energy from renewable resources; this metric can help us make the better choice for Earth. In our example, the region has a low grid carbon intensity, which makes it a good choice. Google Compute Engine Price While we run our applications on Google Cloud, we must also factor in price along with a good sustainability model. Google provides this metric to help us make a choice that works best for us, our users, and the environment. AI for Sustainability AI has the potential to optimize the workings of various applications, conserve resources by detecting opportunities for energy emission reductions, remove or reduce CO2, help develop greener supply chain networks, and make the best use of available resources to avoid waste. To avoid reinventing the wheel, you can refer to our outstanding contributor Leah Zitter's article on the impact of AI on energy efficiency in the Extra Credit section. Google's DeepMind has reduced the electricity used for cooling Google's data centers by 30% and created learning systems to optimize Android battery performance. For the visual learners, here's an infographic outlining how we can use AI in our goal toward a sustainable environment: Extra Credit
Google Cloud Sustainability Details
Google Environmental Report
Google Cloud Region Picker Tool
Google Cloud Efficiency and Sustainability Report
Google Cloud AI Adoption Framework
Impact of AI on Energy Efficiency
With the growing adoption of Google Cloud technologies, knowledge of security has gained paramount importance. It is crucial to understand the technologies, policies, processes, and controls that secure Google Cloud Platform applications. Cloud technologies and security go hand in hand, as cybersecurity threats can invade your applications and affect your business's confidentiality, integrity, and availability. Security is a shared responsibility of the application owner and the cloud provider, and it's essential to understand how to build a robust security model. We have listed 10 security best practices to help keep your cloud environment secure. Understand Your Cloud Locations and Services Understanding your cloud locations and services is a critical best practice for keeping your applications secure. Google Cloud services and products are built on top of a core infrastructure with built-in security features like access control, segmentation, and data control. However, you need to know how your data is stored, encrypted, and managed to ensure your information is secure. Google Cloud offers Virtual Private Clouds (VPCs), on-demand pools of shared resources. VPCs are isolated from each other and can communicate through VPC peering. You can control all ingress (inbound) and egress (outbound) network traffic to any resource via simple firewall rules. When designing a robust security model, the first step is knowing how your applications are hosted and which security services and products Google provides. Google's Data Loss Prevention API helps you discover, classify, and protect your sensitive data. It's a fully managed service that inspects your structured and unstructured data, helping you gain insight and reduce risk to your data and applications. Understand Your External and Internal Security Threats Understanding and being aware of your internal and external threats can help you stay proactive and keep your applications secure. 
Hazards can be present anywhere, and it's useful to understand the STRIDE threat model to keep on top of all the threats your applications can face on Google Cloud. STRIDE stands for spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege. The infographic below explains each of these threats. Google Cloud Armor helps protect your applications against denial of service and has built-in defenses against L3 and L4 DDoS attacks. Leveraging it for your applications on Google Cloud can provide an additional security layer against the threats outlined in the STRIDE model. Identity and Access Management Control IAM is a framework of policies and processes defined by the cloud provider to make sure users have appropriate permissions to access resources, applications, and data in the cloud. IAM helps secure data, prevent unwanted threats, and ensure all users have just the right amount of access to get their work done. Google Cloud Platform has many services and products that protect users and applications by understanding, managing, and controlling access. All resources on Google Cloud are managed hierarchically and grouped into four levels: organizations, folders, projects, and resources. For example, a company using Google Cloud is the top node, followed by its folders, projects, and resources. Each resource has exactly one parent, and children inherit the policies of their parents; so, by default, policies set at the organization node are inherited by all the folders, projects, and resources under that organization. Resource Manager lets you centrally manage these resources by project, folder, and organization. A fundamental way to filter out unwanted users is to set up a robust authentication framework, which grants access only to users who can validate their identity. Google Authenticator lets you do that without having to put in any extra effort. 
Cloud Identity provides additional solutions to secure your accounts, devices, and workspace with advanced protection and password-vaulted applications. You can choose from solutions like single sign-on (one-click access to applications), multi-factor authentication (using two or more factors to validate identity), and endpoint management. To guard access to your applications, you can use Identity-Aware Proxy (IAP): you can verify who is trying to access your application and grant access accordingly. This helps implement a zero-trust model along with centralized access control. IAP can protect access to applications hosted on Google Cloud, on any other cloud, or even on on-premises infrastructure. Here are some IAM best practices that you can follow to keep the data in your applications secure. Active Monitoring Actively monitoring your environment and applications helps discover potential intruders who may be lurking around and targeting your applications' data. Knowing who is accessing your data and monitoring any suspicious activity can help you stay proactive and keep your applications secure. Google Cloud Monitoring, formerly known as Stackdriver Monitoring, helps monitor, troubleshoot, and improve your applications' performance on Google Cloud. It's a fully managed, scalable service that provides easy-to-access dashboards with several performance indicators, notifications, and alerts. Understand the Shared Responsibility Model Google Cloud Platform provides services ranging from highly managed (Functions as a Service) to highly customizable (Infrastructure as a Service), and each service comes with its own security responsibility model. The following diagram shows Google's compute offerings, which you can use to run your applications. Understanding these services is a stepping stone to designing a shared responsibility model. 
Highly managed offerings, like Cloud Functions or Firebase, have more built-in security than highly customizable offerings, which provide more flexibility to users. The following diagram illustrates the shared security model based on the type of service offering used to run your applications. Keep Your Data Encrypted When data is encrypted, or converted into a secret code, the information's true meaning is hidden. Encryption ensures that data is not accessible by anyone other than those allowed to access it. Google Cloud Platform encrypts data at rest by default, which means it encrypts the data you store with no additional action required; data is encrypted before the application writes it to disk. A set of master keys encrypts the data-encryption keys, and this applies to almost all the data you have in the cloud. If you have more sensitive data, you can manage your own encryption keys, choosing between customer-supplied and customer-managed keys. The image below compares these two options to help you make the right choice. Thorough Vulnerability and Penetration Testing Penetration testing means putting on the attacker's hat and thinking like one: organizations or cloud service providers attack their own infrastructure to test its stability and discover vulnerabilities, allowing them to catch and fix problems before any outsider can find them. Google Cloud Platform provides Web Security Scanner as part of the Security Command Center to detect critical vulnerabilities in your applications, even before deployment. It identifies vulnerabilities in your App Engine, Kubernetes Engine, and Compute Engine instances and lets you stay ahead in the security game. Establish and Manage Firewalls A firewall is simply a wall, or barrier, attached to a system to prevent intruders from getting inside. 
In cloud computing, firewalls are rules attached to systems to block unauthorized access while allowing outward communication. Setting security rules on incoming and outgoing traffic establishes a barrier between intruders and the system by filtering traffic and blocking outsiders from gaining unwanted access to data. To allow or deny connections to your virtual machines (VMs), you can apply firewall rules in your Virtual Private Cloud (VPC). Within the configuration, you can set, identify, and enforce VPC firewall rules, allowing you to protect your applications regardless of their configuration and operating system, even if they have not started up. Manage and Institute Cloud Security Guidelines Instituting and managing security best practices and guidelines for the organization is essential to ensuring your applications' safety, and it's necessary to streamline processes so that staff, stakeholders, partners, and leadership are on the same page. Google Cloud has many security partner products you can leverage for your security needs. Apart from that, there are several infrastructure, data protection, logging, and compliance partners who can guide you and your organization in formulating the best guidelines for your applications. To secure your applications and scan for non-compliant resources in your infrastructure, you can leverage open-source tools like Forseti and Config Validator. Here's a snapshot of some of the partners who can guide you in your security needs on Google Cloud; you can view the complete list under the resources section of this article. Train Your Staff The last but critical best practice is to keep your staff up to date on security threats and best practices. Any security measure is of no use if the organization does not follow it, so it's of paramount importance to ensure everyone is aware of security threats and follows the best practices the organization has instituted. 
Google Cloud provides training, whitepapers, articles, and support to ensure compliance with industry standards and keep your applications secure. Visual learner? There's a resource for you below. Extra Credit Here are some resources that you can use to better understand cloud security and design a robust security framework for your applications on the Google Cloud Platform:
Coursera Professional Certificate on Google Cloud Platform Security
Google Cloud Platform Security Best Practices Repository
Google Data Loss Prevention API Documentation
Google Cloud Virtual Private Cloud (VPC) Documentation
Forseti and Config Validator
Google Cloud Platform Documentation
Google Cloud Platform Security Partners
Google Cloud Web Security Scanner Documentation
Google Cloud Monitoring Documentation
Cloud Identity-Aware Proxy Documentation
Cloud Identity Documentation
Resource Manager Documentation
Google Encryption Documentation
Google Cloud Armor Documentation
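To make the firewall best practice above more concrete, here is a toy rule evaluator written in Python. It only sketches the idea of filtering ingress traffic by source range and port; real VPC firewall rules have more fields (targets, protocols, direction, logging) and are enforced by Google's infrastructure, not by application code:

```python
import ipaddress

# Toy ingress rules evaluated in priority order (lower number wins),
# loosely modeled on VPC firewall rule fields. Values are illustrative.
RULES = [
    {"priority": 100, "action": "allow", "source": "10.0.0.0/8", "ports": {22, 443}},
    {"priority": 200, "action": "deny", "source": "0.0.0.0/0", "ports": {22}},
    {"priority": 300, "action": "allow", "source": "0.0.0.0/0", "ports": {80, 443}},
]

def evaluate(src_ip: str, port: int) -> str:
    """Return the action of the highest-priority matching rule."""
    for rule in sorted(RULES, key=lambda r: r["priority"]):
        in_range = ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["source"])
        if in_range and port in rule["ports"]:
            return rule["action"]
    # VPC networks also deny ingress by default when no rule matches.
    return "deny"

print(evaluate("10.1.2.3", 22))     # internal SSH: allow
print(evaluate("203.0.113.9", 22))  # external SSH: deny
print(evaluate("203.0.113.9", 443)) # external HTTPS: allow
```

The design point this illustrates is the one from the article: narrow allow rules for trusted ranges, broad deny rules for sensitive ports, and deny-by-default for everything else.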
Pali Bhat is in charge of Google's application modernization and developer solutions portfolio, and in October he and Sean Chinksi, Chief Customer Officer, discussed Anthos, Google Kubernetes Engine (GKE), and various other hot-topic issues. “As you think about your applications, you'll see they're the heart of your business and how you serve customers,” Bhat said. “They will become more germane and central to everything that your business does. And so, it's really important to have a platform that empowers all of your technology and application development teams to be proactive and to not have to worry about infrastructure, while still being secure and compliant and meeting the needs of your business.” Watch the whole conversation below. Did you catch his answer to our favorite question at C2C, “Imagine Google's product and design portfolio is a 10-episode Netflix series. What episode are we on?” Share it below!
Whether you’re an experienced coder or an app development novice, software packages like Kubernetes and Docker Swarm are two great tools that can help streamline virtualization methods and container deployment. As you search for an orchestration tool, you will come across two common platforms: Kubernetes and Docker Swarm. Docker dominates the containerization world, and Kubernetes has become the de-facto standard for automating deployments, monitoring your container environment, scaling your environment, and deploying containers across nodes and clusters. When comparing Docker with Kubernetes, the main difference is that Docker is a containerization technology used to host applications. It can be used without Kubernetes or with Docker Swarm as an alternative to Kubernetes.While both architectures are massively popular in the world of container orchestration, they have some notable differences that are important to understand before choosing one over the other. Today, we’re discussing Kubernetes vs. Docker Swarm’s different containerization capabilities to help teams and engineers choose the exemplary architecture for their app development purposes. What Is an App Container? To fully understand the differences between Docker and Kubernetes, it’s essential to understand what is an app container. In software development, a container is a technology that hosts applications. They can be deployed on virtual machines, physical servers, or on a local machine. They use fewer resources than a virtual machine and interface directly with the operating system kernel rather than via hypervisor in a traditional virtual machine environment, making containers a more lightweight, faster solution for hosting applications. 
Application containers allow apps to run simultaneously without the need for multiple virtual machines, freeing up infrastructure storage space and improving memory efficiency. Many large tech companies have switched to a containerized environment because it’s faster and easier to deploy than virtual machines. Container technology runs on any operating system, and containers can be pooled together to improve performance. What Are Kubernetes and Docker? Kubernetes and Docker Swarm are two popular container orchestration platforms designed to improve app development efficiency and usability. Both bundle app dependencies like code, runtime, and system settings together into packages that ultimately allow apps to run more efficiently. Kubernetes is an open-source container orchestration platform created by Google; the project began in 2014. Docker Swarm is the native orchestration tool for Docker, the containerization platform Docker, Inc. released a year earlier, in 2013, to improve app development’s scalability and flexibility. Still, the two projects have different architectural components and different app development capabilities, which fuel the Kubernetes vs. Docker Swarm debate. Kubernetes Architecture Components A critical difference between Kubernetes and Docker Swarm lies in the infrastructures of the two platforms. Kubernetes architecture components, for instance, are modular: the platform places containers into groups and distributes load among them. This is different from Docker Swarm, whose architecture uses clusters of virtual machines running Docker software for container deployment. Another main difference between the two platforms is that Kubernetes itself runs on a cluster. Clusters are several nodes (e.g., virtual machines or servers) that work together to run an application.
It’s an enterprise solution necessary for performance and monitoring across multiple containers. Scalability Another difference between Kubernetes and Docker Swarm is scalability. Should you decide to work with other container services, Kubernetes will work with any compliant solution, allowing you to scale across different platforms. Considered an enterprise solution, it runs on clusters to which you can add nodes whenever additional resources are required. Deployment Docker Swarm is specific to Docker containers and deploys without any additional installation on nodes. Kubernetes, however, requires a container runtime on each node in order to work with Docker containers. Kubernetes uses container APIs with YAML manifests to communicate with containers and configure them. Load Balancing Load balancing is built into Kubernetes. Kubernetes deploys pods, which comprise one or several containers. Pods are deployed across a cluster, and the Kubernetes Service performs load balancing on incoming traffic. Docker Swarm Architecture Components Docker Swarm takes a different approach to creating clusters for container orchestration. Unlike Kubernetes, which uses groups of app containers to distribute the load, Docker Swarm consists of virtual machines that host containers and distribute work among them. Scalability Docker Swarm is specific to Docker containers. It will scale well with Docker and deploy faster than Kubernetes, but you are limited to Docker technology. Consider this limitation when you choose Docker Swarm vs. Kubernetes. Deployment While the Docker Swarm architecture allows for much faster, ad-hoc deployments compared to Kubernetes, Docker Swarm has more limited deployment configuration options, so research these limitations to ensure they will not affect your deployment strategies. Load Balancing The DNS element in Docker Swarm handles incoming requests and distributes traffic among containers.
Developers can configure load balancing ports to determine which services run on which containers and to control incoming traffic distribution. Difference Between Docker and Kubernetes To recap, while Kubernetes and Docker Swarm have many similar capabilities, they also differ significantly in their scalability, deployment capabilities, and load balancing. Kubernetes vs. Docker Swarm ultimately comes down to an individual developer’s or team’s need to scale or streamline aspects of their container deployment: whether those processes are better suited to a platform capable of speedy deployments, like Docker Swarm, or one offering flexibility and built-in load balancing, like Kubernetes. When to Use Kubernetes Google developed Kubernetes for deployments that require more flexibility in configuration using YAML. Because Kubernetes is so popular among developers, it’s also a good choice for people who need plenty of community support with setup and configuration. Another good reason to choose Kubernetes is if you run on Google Cloud Platform, because the technology is easily configurable and integrates well with Google technology. Kubernetes is an enterprise solution, so its flexibility comes with additional complexity, making it more challenging to deploy. However, once you overcome the challenge of learning the environment, you have more flexibility to execute your orchestration. When to Use Docker Swarm Because Docker Swarm was built directly for Docker containers, it’s beneficial for developers learning containerized environments and orchestration automation. Docker Swarm is easier to deploy, so it can be more beneficial for smaller development environments. For small development teams that prefer simplicity, Docker Swarm requires fewer resources and less overhead.
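To make the YAML-based configuration described above concrete, here is a minimal sketch of a Kubernetes Deployment paired with a Service. The names, replica count, and image are hypothetical placeholders:

```yaml
# A Deployment that runs three replicas of a containerized app.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:1.25   # any container image works here
          ports:
            - containerPort: 80
---
# A Service that load-balances incoming traffic across the pods above.
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  selector:
    app: hello
  ports:
    - port: 80
      targetPort: 80
  type: LoadBalancer
```

Applied with `kubectl apply -f hello.yaml`, the Deployment schedules three pod replicas across the cluster, and the Service distributes incoming traffic among them, which is the built-in load balancing discussed above.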
Extra Credit https://searchitoperations.techtarget.com/definition/application-containerization-app-containerization https://www.docker.com/resources/what-container https://www.sumologic.com/glossary/docker-swarm/ https://thenewstack.io/kubernetes-vs-docker-swarm-whats-the-difference/ https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/
This was written and crafted by Vijeta Pai. Connect with her on the platform, @Vijeta90. There’s a lot of buzz around low code vs. no code in the cloud-computing world, and it’s easy to confuse the two. Both approaches are user-friendly and intuitive, and both are a boon for non-coders, beginners, and experts who wish to build their applications in the cloud. However, there’s a lot of ambiguity around what is low code and what is no code, or the overall usage of these terms, since they appear to overlap in certain places. Low-code and no-code app development platforms come with their own unique advantages and disadvantages. It’s helpful to learn about their features, differences, similarities, and use cases to determine which one is best for you. Let’s read ahead and find out which software would provide the best solution to keep your team agile throughout the development process. What Is Low-Code App Development? Low code is an app development practice that allows developers to drag and drop blocks of existing code into a workflow to design applications in the cloud. This visual development style is quickly becoming a popular option among digital enterprises. Gartner predicts low-code app development will be responsible for 65% of application development by 2024. This kind of app development is very similar to building software in a programming language, with the added advantages of predefined workflows and shortcuts. Rather than writing the code from scratch, relearning the programming framework with each new upgrade, or writing several tests before the first actual line of code, you can easily create something productive. In the debate of low code vs. no code, low-code development comes with many advantages. Among them are scalability and speed.
Low-code development allows digital enterprises to automate steps of the development life cycle, increasing production speeds and cutting down on development backlogs. Additionally, low-code app development is fast, highly scalable, and resource-flexible, offering teams the flexibility to work with complex architecture without spending a lot of effort writing code from scratch, duplicating work, or reinventing the wheel. What Is a Low-Code Platform? A low-code development platform (LCDP) is a development environment that uses a graphical interface to visually program application software. Low-code platforms use a variety of agile development tools that accelerate digital innovation. In the discussion of low code vs. no code, it’s useful to understand the advantages of using a low-code platform. Along with the benefits of security, speed, and scalability, these platforms also offer low-risk ROI, since they come with robust security measures, data integration, and customizable support, reducing business risks. However, the most enticing part of a low-code platform is the ability to deploy applications with a single click and send them to production without any hassle. Low-code development platforms are great for enterprises of all sizes. Still, many teams opt for low-code platforms’ security and governance benefits when choosing between low-code and no-code platforms. Best Low-Code App Builders One of the most significant benefits of low-code app development is versatility. The best low-code app builders on the market create high levels of cross-functionality across teams. Below, check out some of the popular low-code app builders on the market: OutSystems Mendix Appian Google App Maker Salesforce App Cloud What Is No-Code App Development? To find the best app development option for their team, digital enterprises should also examine no-code options when comparing low-code vs. no-code app development.
No-code app development is also a visual software design method, one that allows “citizen developers” to drag and drop prebuilt components into a framework. Despite sounding similar to low-code app development, some subtle differences exist based on use cases and users. Unlike low code, no-code app development allows nontechnical enterprises and people with no coding knowledge to create apps in the cloud quickly. The best no-code app builders allow for even higher levels of versatility across teams; those working in HR and admin should feel just as comfortable working in no-code app development spaces as those in IT. Benefits of no-code app development: Versatility Cost-effectiveness Ease of use No need for extensive training The ability to address a business need without shifting attention away from other mission-critical goals What Is a No-Code Platform? A no-code development platform (NCDP) is an app development environment that uses a graphical interface and no code or programming language to develop applications. Unlike low-code platforms, which require some level of programming knowledge, the best no-code app builders use an interface simple enough to take in-house, protecting sensitive internal processes like vacation requests, payroll, and more. No-code platforms place app development control in internal teams’ hands, eliminating the need to hire third-party app developers or bring additional members onto a team to streamline and even automate internal processes. Best No-Code App Builders If you’re looking for a simple solution to bring app development in-house, we’ve listed some of the best no-code app builders available to digital-native and enterprise organizations: Kissflow Airtable Nintex Process Platform AppSheet Stackby Low Code vs. No Code: Choosing the Right Solution for Your Team To simplify the conversation, we’ve broken down the differences between low code and no code in the form of an infographic.
In Conclusion: Low-code tools are designed for those "fluent" in coding languages. Low-code platforms eliminate monotonous tasks for skilled developers, allowing the reuse of foundational code through drag-and-drop functionality. No-code tools are for individuals outside the developer space, with little to no programming experience. No-code platforms provide all the tools needed to create an application and allow users to build their own unique app from prebuilt modules. No code is like a Lego set, with the platform providing intuitive pieces that can be assembled into anything with little to no experience, while low code is a more advanced model set, requiring a background of skill for a more detailed and tailored product.
C2C Deep Dives invite members of the community to bring their questions directly to presenters. Google Cloud Run is quickly becoming one of the most engaging, albeit challenging, products to date. In this session, Wietse Venema (@wietsevenema), a software engineer, trainer, and author of Building Serverless Applications with Google Cloud Run, provided a high-level understanding of what Google Cloud Run is, what the developer workflow looks like, and how to position it next to other compute products on Google Cloud. Explore the demo here.
Vijeta Pai, a Google Cloud expert and technology leader, demystifies the cloud using illustrations, comics, and easy-to-understand explanations. Today, we're bringing you her post about Identity Access Management (IAM). What is IAM? Simply put, it's a framework of policies and processes defined by the cloud provider to make sure users have appropriate permissions to access resources, applications, and data in the cloud. This not only helps secure the data and prevent unwanted threats but also ensures all users have the right amount of access to get their work done. There are three main parts to Identity Access Management (IAM) in Google Cloud Platform (GCP): Members, Roles, and Policies. You can read more about them on Pai's website, Cloud Demystified. Visual learner? Check out the comic. Best Practices On her blog, you'll also find some of the best practices that Google Cloud suggests for IAM, but here is a highlight. Get Connected Keep up with her on the C2C community platform (join here!). Extra Credit Google Cloud IAM Documentation Cloud IAM on Qwiklabs Identity and Access Management (Coursera)
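To make the Members/Roles/Policies model concrete, here is a minimal sketch of what a GCP IAM policy looks like. The member identities, project name, and role choices are hypothetical examples, not recommendations:

```yaml
# An IAM policy binds roles (named sets of permissions) to members.
bindings:
  - role: roles/storage.objectViewer     # read-only access to Cloud Storage objects
    members:
      - user:analyst@example.com
  - role: roles/editor                   # broad edit access; grant sparingly
    members:
      - serviceAccount:ci-bot@my-project.iam.gserviceaccount.com
etag: BwWKmjvelug=
```

A policy file like this can be applied with `gcloud projects set-iam-policy my-project policy.yaml`; in line with the best practices above, the usual guidance is to grant the narrowest predefined role that covers each member’s needs.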
Priyanka Vergadia, a developer advocate at Google, has created more than 300 videos, articles, podcasts, courses, and tutorials to help developers learn Google Cloud fundamentals, solve their challenges, and pass certifications. Or, in other words, she's your go-to Cloud Girl. Vergadia will be sharing her excellent content with the C2C community, and we're excited to embrace her creative solutions to complicated tech. Our first post from Vergadia is about where to run your systems. Have you ever wondered how a tech stack would come together? Take a look at the sketch, and feel free to share your questions on our C2C Community platform (join here) or with Vergadia on Twitter! Want to know more about who Vergadia is and why she started #GCPSketchnotes? A profile featuring Cloud Girl will be coming soon!
Originally published on December 4, 2020. In this C2C Deep Dive, product expert Richard Seroter aimed to build the foundations of understanding with live Q&A. Here’s what you need to know: What is Anthos? In its simplest form, Anthos is a managed platform that extends Google Cloud services and engineering practices to your environments so you can modernize apps faster and establish operational consistency across platforms. Why GCP and Anthos for app modernization? Responding to an industry shift and need, Google Anthos “allows you to bring your computing closer to your data,” Seroter said. So if data centers are “centers of data,” it's helpful to have access to that data in an open, straightforward, portable way, and to be able to do that at scale and consistently. Hear Seroter explain how this can help you consolidate your workloads. First-generation vs. second-generation cloud-native companies: What have we learned? The first generation was all about infrastructure automation and a continuous delivery (CD) mindset at a time when there wasn’t much research into how to make it happen. So some challenges included configuration management, dealing with multiple platforms, and dealing with security. Now, as Seroter explains in this clip, the second generation is taking what has been learned and building upon it for sustainable scaling with a focus on applications. Is unified hub management possible through Anthos for the new generation? Yep. Anthos offers a single-management experience: you can manage every Anthos cluster in one place, see what each cluster is doing, and push policy back to them, too. You can apply configurations and more to simplify the billing and management experience. Serverless anywhere? You bet. Use Cloud Run for Anthos. Building upon the first generation of platform as a service (PaaS), GCP offers Cloud Run for Anthos as a solution for teams that need more flexibility and want to build on a modern stack.
Besides being Richard Seroter’s favorite, it balances the three vital paradigms existing today: PaaS, infrastructure as a service (IaaS), and container as a service (CaaS). Watch the clip to hear Seroter explain the how and the why. What about a GitOps workflow and automation: is scaling possible? Yes. By using Anthos Configuration Management (ACM), policy and configuration are possible at scale. You can manage all cloud infrastructure, not just Kubernetes apps and clusters, and even run end-to-end audits and peer review. Watch to learn how this works. Question from the community: Are Hybrid AI and BigQuery capabilities available for Anthos on-prem? With Hybrid AI for Anthos, Google offers AI/ML training and inferencing capabilities with a single click. Google Anthos also allows for custom AI model training and MLOps lifecycle management using virtually any deep-learning framework. Prefer to watch the whole C2C Deep Dive on Application Development with Anthos?