Airspot Tech

Serverless Container Glue App Ecosystem

The concept of "serverless container" combines the benefits of serverless computing and containerization technology. It refers to a deployment model where containers package and run applications, and the underlying infrastructure is managed automatically by a serverless platform.

1. Definitions and Benefits

Combining containerization and serverless computing provides several benefits to software development in any organization. Containerization offers a lightweight, isolated runtime environment, ensuring consistent and portable deployments. Serverless computing abstracts away infrastructure management, letting developers focus on writing code rather than provisioning or managing servers. By pairing the agility of containers with the automatic scaling of serverless computing, serverless containers deliver greater agility and efficient resource utilization. Operations are simplified, so developers can concentrate on application development without worrying about infrastructure tasks. Serverless containers also scale automatically with workload demand, without manual intervention.

By leveraging serverless containers, developers can focus on building and deploying applications without the burden of managing underlying infrastructure. They can take advantage of containerization benefits, such as consistency, portability, and efficient resource utilization, while benefiting from automatic scaling, simplified operations, and pay-per-use pricing offered by serverless platforms.

2. Knative serverless containers

Serverless containers with Knative provide a powerful platform for deploying and running containerized applications in a serverless manner. Knative is an open-source project that extends Kubernetes to enable seamless orchestration and scaling of serverless workloads.

Knative's three core components, Build, Eventing and Serving, let you build and deploy your containerized applications without worrying about the underlying infrastructure.

Cloud Run is based on Knative Serving, which is the component of Knative responsible for managing the deployment and scaling of serverless containers. When you deploy an application on Cloud Run, it leverages Knative Serving to handle the scaling and routing of requests to your container instances.

3. Cloud Run

On Google Cloud, the concept of serverless containers is realized through services like Cloud Run and Cloud Run for Anthos. Cloud Run is a fully managed serverless execution environment for containerized applications. It allows you to run stateless HTTP-driven containers without the need to manage the underlying infrastructure. With Cloud Run, you can focus on writing code and let Google Cloud handle the operational aspects of scaling, patching and managing the infrastructure.
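Cloud Run's contract with a container is simple: the platform injects a PORT environment variable and routes HTTP requests to whatever is listening on it. A minimal, stdlib-only sketch of a service honouring that contract (the greeting text and fallback port are our own choices, not part of the contract):

```python
# Minimal sketch of a Cloud Run-compatible HTTP service.
# Cloud Run sets the PORT environment variable; the container
# must serve HTTP on that port.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

def get_port(default=8080):
    # Cloud Run injects PORT; fall back to 8080 for local runs
    return int(os.environ.get("PORT", default))

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Hello from a serverless container\n")

if __name__ == "__main__":
    # Serve until the platform scales the instance down
    HTTPServer(("", get_port()), Handler).serve_forever()
```

The same code runs unchanged locally and on Cloud Run, since the only platform-specific input is the PORT variable.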

By using Cloud Run, you can take advantage of the serverless container capabilities provided by Knative without having to manage the Knative infrastructure yourself. Cloud Run abstracts away the complexities of Knative, allowing you to focus on building and deploying your applications.

Using Cloud Run on Google Cloud, integration is straightforward: serverless containers integrate seamlessly with services like Cloud Pub/Sub, Cloud Storage and Cloud Logging, enabling a unified and comprehensive application architecture. Moreover, pay-per-use pricing means you pay only for the resources your containers actually consume.

By leveraging serverless containers on Google Cloud, you can deploy and run your containerized applications with ease, scalability, and cost efficiency. Whether you choose Cloud Run for fully managed serverless capabilities or Cloud Run for Anthos for hybrid and multi-cloud deployments, Google Cloud provides the infrastructure and services to support your serverless container needs.

Cloud Run offers several advantages in the context of platform engineering. The following table lists some reasons why serverless containers are a better fit in certain scenarios.

Cloud Run - Table of Benefits

Increased Flexibility
  • Serverless containers provide greater flexibility in application design and development.
  • Packaging and deploying full-fledged applications becomes easier, offering more control over the runtime environment, dependencies and configurations.
  • Enables building and deploying complex applications with specific runtime requirements or custom infrastructure needs.

Resource Efficiency
  • Serverless containers are more resource-efficient than traditional infrastructure or virtual machine-based solutions.
  • Containerized applications scale precisely with demand, automatically allocating and deallocating resources as needed.
  • Ensures optimal resource utilization and cost efficiency by scaling only when necessary, avoiding overprovisioning of resources.

Simplified Operations
  • Serverless containers abstract away many underlying infrastructure management tasks, allowing platform engineers to focus more on application development and less on operational overhead.
  • Managed container platforms, such as Google Cloud's Cloud Run, handle provisioning, scaling, monitoring and fault tolerance automatically.
  • Simplifies operations, frees up engineering resources and enables teams to concentrate on delivering value through application development and innovation.

In our approach serverless, containers and event-driven-architectures are the perfect solution for having a logic glue that allows you to create an application ecosystem by seamlessly integrating different applications and services together. With a platform engineering perspective, this approach enables you to build a cohesive and scalable ecosystem that leverages the strengths of each component.

4. Pub/Sub

We will implement our glue logic according to the message distribution mechanism known as Pub/Sub. Pub/Sub, which stands for Publish/Subscribe, is an asynchronous messaging pattern frequently utilized in distributed systems and event-driven architectures. Within the Pub/Sub pattern, messages are transmitted by publishers (or producers) to channels (also referred to as topics) without knowledge of the specific recipients, and subscribers (or consumers) receive the messages by expressing interest in a particular channel. The key benefit of the Pub/Sub pattern lies in the decoupling of publishers and subscribers, rendering it well-suited for implementation in an Event Sourcing application. Pub/Sub offers remarkable scalability and availability, making it an excellent option for large-scale applications.
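The decoupling described above can be illustrated with a toy, in-process sketch of the pattern: publishers push messages to named topics without knowing who listens, and subscribers register interest in a topic. (This is illustrative only; a real system such as Google Cloud Pub/Sub adds durability, ordering, retries and horizontal scaling.)

```python
# Toy in-process Publish/Subscribe broker: illustrates the
# decoupling of publishers and subscribers, nothing more.
from collections import defaultdict

class Broker:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # A subscriber expresses interest in a topic
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # The publisher never sees the subscriber list: full decoupling
        for callback in self._subscribers[topic]:
            callback(message)

broker = Broker()
received = []
broker.subscribe("orders", received.append)
broker.publish("orders", {"id": 1, "status": "created"})
# Publishing to a topic nobody subscribed to is a silent no-op
broker.publish("unused-topic", "ignored")
```

Because the publisher only knows the topic name, subscribers can be added or removed without touching publisher code, which is exactly the property that makes Pub/Sub a good fit for event sourcing.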

5. The Five Golden Steps in the Applicative Ecosystem

We are ready to create an applicative ecosystem combining Cloud Run and Pub/Sub, following five golden steps:

  1. integration,

  2. event-driven architecture,

  3. company-specific logic,

  4. scalability/resilience,

  5. seamless integration.

As for the integration of applications, Cloud Run acts as the glue, deploying each application as a separate service within a container, while Pub/Sub enables easy connection between services. Within an event-driven architecture based on Pub/Sub, services can publish events to topics, and others can subscribe to react, ensuring seamless communication and coordination. The company-specific logic then processes events, transforms data, applies business rules and triggers actions based on predefined logic, giving flexibility and extensibility.

For scalability and resilience, Cloud Run's automatic scaling and Pub/Sub's asynchronous messaging ensure reliable delivery and fault tolerance. Last but not least, seamless integration with services like BigQuery, Vertex AI and Firestore provides additional capabilities for machine learning, storage, analytics and collaboration.

The Applicative Ecosystem - Table of capabilities

Integration of Applications
  • Cloud Run is the glue for integrating different applications within the ecosystem.

Event-Driven Architecture
  • Pub/Sub enables the establishment of an event-driven architecture.

Company-Specific Logic
  • Cloud Run allows processing events, data transformations, business rules and actions based on predefined logic.

Scalability and Resilience
  • Cloud Run's automatic scaling and Pub/Sub's asynchronous messaging ensure scalability, reliable message delivery and fault tolerance.

Seamless Integration with Google Cloud Services
  • Cloud Run and Pub/Sub seamlessly integrate with various Google Cloud services, providing additional capabilities for machine learning, data storage, analytics and collaboration.

6. The Application Ecosystem

We can now list all the needed steps to develop a reference model approach to build an application ecosystem on Google Cloud.

This event-driven ecosystem involves identifying events, defining event publishers and handlers, emitting events from components, orchestrating workflows and connecting with external services. Cloud Run serves as a powerful platform for implementing event-driven microservices and integrating with various components in your ecosystem.

We can list six steps:

  1. identify events,

  2. define event publishers,

  3. implement event handlers,

  4. emit events,

  5. orchestrate workflows,

  6. connect with external services.

The above list deserves a few more words. Identifying the events determines the key events occurring within the ecosystem; events can include data changes, user actions or external triggers. Defining event publishers identifies the services or systems responsible for generating and emitting events. Implementing event handlers creates Cloud Run services that receive and process events, each handler performing specific tasks or workflows. Emitting events adds hooks or triggers in each component that publish events to the Cloud Run event handlers, integrating them into the ecosystem. Orchestrating workflows uses orchestration tools to manage the event flow and sequence across services, defining complex workflows and dependencies. Connecting with external services, finally, leverages Cloud Run to reach external APIs, handling requests, authentication and data processing.

Application Ecosystem: the Code

We will use Flask in the code. Flask is a popular choice for building web applications and APIs. It provides essential features for web development, such as routing, request handling and templating. It follows the "micro" philosophy, focusing on simplicity and minimalism, allowing developers more control over the application's structure and functionality.

We will now walk through all six development steps, giving details on further options (legacy software, SAP and Firestore).

  1. Identify Events. Determine the key events that occur within your ecosystem. These events can be data changes, user actions, or external triggers. For example, events could include data updates in BigQuery, user interactions in the mobile application, or API calls from external systems.
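Before wiring anything up, it helps to settle on a common event shape. A sketch of such an envelope follows; the field names are our own illustration, not a schema prescribed by Cloud Run or Pub/Sub:

```python
# Hypothetical event envelope: a shared shape for every event
# in the ecosystem (field names are illustrative)
import json
from datetime import datetime, timezone

def make_event(event_type, data):
    return {
        "type": event_type,  # e.g. "analytics" or "data-transformation"
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "data": data,
    }

# Example: an event describing a data update destined for analytics
event = make_event("analytics", {"table": "sales", "rows": 42})
```

Agreeing on one envelope up front means every handler can dispatch on the same `type` field, as the Cloud Run handler below does.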

  2. Define Event Publishers. Identify the services or systems that will act as event publishers, responsible for generating and emitting events. In our scenario, event publishers could be legacy custom software running on Compute Engine, SAP on Google Cloud or external SaaS CRM systems like Salesforce.

  3. Implement Event Handlers. Create Cloud Run services that act as event handlers to receive and process the events. Each event handler can be responsible for specific tasks or workflows. For example, you can have an event handler that triggers machine learning models for data analytics, another event handler that performs data transformations and updates in BigQuery, and so on. Cloud Run provides a scalable and serverless environment to run these event-driven microservices.

from flask import Flask, request

app = Flask(__name__)

@app.route('/', methods=['POST'])
def event_handler():
    # Extract the event payload from the request
    event = request.get_json()

    # Process the event based on its type or topic
    event_type = event.get('type')
    if event_type == 'analytics':
        # Trigger machine learning models for data analytics
        trigger_data_analytics(event)
    elif event_type == 'data-transformation':
        # Perform data transformations and updates in BigQuery
        process_data_transformation(event)
    else:
        # Handle unrecognized event types
        handle_unknown_event(event)

    # Return a response indicating successful processing
    return 'Event processed successfully'

def trigger_data_analytics(event):
    # Implement your logic to trigger machine learning models for data analytics
    # You can access event data, perform computations, and invoke the necessary ML models or services
    pass

def process_data_transformation(event):
    # Implement your logic to perform data transformations and updates in BigQuery
    # You can access event data, apply transformations, and update data in BigQuery tables
    pass

def handle_unknown_event(event):
    # Implement your logic to handle unrecognized event types
    # This could include logging, sending alerts, or performing fallback actions
    pass

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)

  4. Emit Events. Integrate the event-driven ecosystem by emitting events from the respective components. This can be done by implementing event triggers or hooks within each component to publish events to the event handlers running on Cloud Run. For example, the legacy custom software can be enhanced to emit events whenever specific actions occur, SAP can trigger events on data changes and the mobile application backend can publish events on user interactions.


Legacy custom software (Compute Engine)

# Sample code snippet in Python
# Perform some action in the legacy custom software
# ...

# Emit an event whenever a specific action occurs
event_payload = {
    'type': 'legacy-action',
    'data': {
        'event_details': 'Some relevant information about the action',
        # Include any additional data relevant to the event
    }
}

# Publish the event to the Pub/Sub topic
publish_event('legacy-topic', event_payload)


SAP on Google Cloud

* Sample code snippet in ABAP
* Detect data changes in SAP
* ...

* Emit an event whenever a relevant data change occurs
DATA: event_payload TYPE string.
event_payload = `{"type": "sap-data-change", "data": {"event_details": "Some relevant information about the data change"}}`.

* Publish the event to the Pub/Sub topic
CALL FUNCTION 'PUBLISH_EVENT'
  EXPORTING
    topic = 'sap-topic'
    data  = event_payload.

Mobile application (Firestore)

// Sample code snippet in JavaScript (Node.js)
// Firestore trigger that detects user interactions
exports.firestoreTrigger = (change, context) => {
  const documentId = context.params.documentId;
  const documentData = change.after.data();

  // Emit an event on user interaction
  const eventPayload = {
    type: 'user-interaction',
    data: {
      documentId,
      documentData,
      // Include any additional data relevant to the event
    }
  };

  // Publish the event to the Pub/Sub topic
  publishEvent('mobile-topic', eventPayload);
};
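The publish helper called in the snippets above is left undefined. A minimal Python sketch using the google-cloud-pubsub client could look like the following; the project ID is a placeholder, and the client import is deferred so the payload-encoding part can be reused on its own:

```python
import json

def encode_event(event_payload):
    # Serialize an event payload to the UTF-8 bytes Pub/Sub expects
    return json.dumps(event_payload).encode("utf-8")

def publish_event(topic_name, event_payload, project_id="my-project"):
    # Hypothetical helper sketch: publish an event to a Pub/Sub topic.
    # Requires the google-cloud-pubsub package and valid credentials;
    # project_id above is a placeholder, not a real project.
    from google.cloud import pubsub_v1

    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(project_id, topic_name)
    future = publisher.publish(topic_path, data=encode_event(event_payload))
    return future.result()  # blocks until the server returns a message ID
```

The ABAP and JavaScript snippets would need equivalent helpers in their own ecosystems; the essential point is the same in each case: serialize the payload and hand it to the topic.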

  5. Orchestrate Workflows. Use workflow orchestration tools like Cloud Workflows or Google Cloud Composer to manage the flow and sequence of events across multiple services. This allows you to define complex workflows, dependencies and conditional logic between different event handlers and components.

# Sample workflow definition in YAML
main:
  steps:
    - name: trigger-legacy-action
      call: http.post
      args:
        url: https://legacy-service/trigger-action
        body:
          type: legacy-action
          data:
            event_details: Some relevant information about the action
    - name: trigger-sap-data-change
      call: http.post
      args:
        url: https://sap-service/trigger-data-change
        body:
          type: sap-data-change
          data:
            event_details: Some relevant information about the data change
    - name: trigger-user-interaction
      call: http.post
      args:
        url: https://mobile-backend/trigger-user-interaction
        body:
          type: user-interaction
          data:
            documentId: {{ workflow.variables.documentId }}
            documentData: {{ workflow.variables.documentData }}
            # Include any additional data relevant to the event
    - name: finalize-workflow
      call: http.post
      args:
        url: https://final-service/complete-workflow
        body:
          workflowId: {{ }}
          status: success

  6. Connect with External Services. Use Cloud Run to connect with external services like Salesforce, Firestore and Workspace through their respective APIs. Cloud Run provides the flexibility to handle API requests, authenticate with the external services, and process the required data or actions.

# Sample code snippet in Python using Flask framework
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)

@app.route('/handle-salesforce-event', methods=['POST'])
def handle_salesforce_event():
    # Extract the event payload from the request
    event_payload = request.json

    # Process the event payload or perform any required actions
    # ...

    # Make a request to the Salesforce API
    response = requests.get('', headers={'Authorization': 'Bearer YOUR_ACCESS_TOKEN'})

    # Handle the response from the Salesforce API
    # ...

    # Return a response to the caller
    return jsonify({'message': 'Event handled successfully'})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)

By leveraging Cloud Run's event-driven architecture, you can create a cohesive ecosystem that seamlessly integrates custom legacy software, business software, machine learning, data analytics, external CRM systems, mobile application backends and collaboration tools. The event-driven approach allows for loosely coupled components, scalability and flexibility in managing complex workflows and interactions within the ecosystem.

7. Harness the power of event-driven architecture

In modern application development, serverless containers provide the perfect blend of agility and resource efficiency. Discover the potential of serverless containers and Cloud Run. Simplify operations, enjoy seamless integration and harness the power of event-driven architecture.
