Last reviewed 2025-01-23 UTC
Running workloads in the cloud requires that clients in some scenarios have
fast and reliable internet connectivity. Given today's networks, this
requirement rarely poses a challenge for cloud adoption. There are, however,
scenarios when you can't rely on continuous connectivity, such as:
Sea-going vessels and other vehicles might be connected only
intermittently or have access only to high-latency satellite links.
Factories or power plants might be connected to the internet, but these
facilities might have reliability requirements that exceed the availability
claims of their internet provider.
Retail stores and supermarkets might be connected only occasionally or
use links that don't provide the necessary reliability or throughput to
handle business-critical transactions.
The edge hybrid architecture pattern addresses these challenges by running
time- and business-critical workloads locally, at the edge of the network, while
using the cloud for all other kinds of workloads. In an edge hybrid
architecture, the internet link is a noncritical component that is used for
management purposes and to synchronize or upload data, often asynchronously, but
isn't involved in time or business-critical transactions.
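To make the asynchronous, noncritical role of the internet link concrete, the
following minimal sketch shows a store-and-forward approach: transactions are
committed to a local queue and drained to the cloud whenever the link happens
to be available. The project ID, topic name, and SQLite file are hypothetical
placeholders; this is an illustration of the pattern, not a prescribed
implementation.

```python
# Store-and-forward sketch: business transactions are committed to a local
# queue first; a background sync step drains the queue to the cloud whenever
# the (noncritical) internet link is available.
import json
import sqlite3

from google.cloud import pubsub_v1  # pip install google-cloud-pubsub

# Hypothetical identifiers; replace with your own project and topic.
PROJECT_ID = "example-project"
TOPIC_ID = "edge-transactions"

db = sqlite3.connect("edge_queue.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS outbox (id INTEGER PRIMARY KEY, payload TEXT)")

def record_transaction(payload: dict) -> None:
    """Completes locally even while the internet link is down."""
    with db:
        db.execute("INSERT INTO outbox (payload) VALUES (?)",
                   (json.dumps(payload),))

def sync_outbox() -> None:
    """Best-effort upload; rows stay queued when publishing fails."""
    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)
    rows = db.execute("SELECT id, payload FROM outbox ORDER BY id").fetchall()
    for row_id, payload in rows:
        try:
            publisher.publish(topic_path, payload.encode("utf-8")).result(timeout=10)
        except Exception:
            return  # Link is down or slow; retry on the next sync cycle.
        with db:
            db.execute("DELETE FROM outbox WHERE id = ?", (row_id,))

record_transaction({"sale_id": 42, "amount_cents": 1999})
sync_outbox()  # Typically run periodically, for example from a scheduler.
```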
Advantages
Running certain workloads at the edge and other workloads in the cloud offers
several advantages:
Running workloads that are business- and time-critical at the edge helps
ensure low latency and self-sufficiency. If internet connectivity fails or
is temporarily unavailable, you can still run all important transactions.
At the same time, you can benefit from using the cloud for a significant
portion of your overall workload.
You can reuse existing investments in computing and storage equipment.
Over time, you can incrementally reduce the fraction of workloads that
are run at the edge and move them to the cloud, either by reworking certain
applications or by equipping some edge locations with internet links that
are more reliable.
Internet of Things (IoT)-related projects can become more cost-efficient
by performing data computations locally. This allows enterprises to run and
process some services locally at the edge, closer to the data sources. It
also allows enterprises to selectively send data to the cloud, which can
help to reduce the capacity, data transfer, processing, and overall costs
of the IoT solution.
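As an illustration of local processing with selective upload, the following
sketch collapses a window of raw sensor readings into a small summary so that
only aggregates and anomalous values leave the edge. The sensor ID and
threshold are hypothetical, and the resulting summary could be handed to an
upload path such as the store-and-forward queue sketched earlier.

```python
# Illustrative data reduction at the edge: raw readings stay local, and only
# a compact summary (plus any anomalous readings) is forwarded to the cloud,
# which reduces transfer and processing costs.
from statistics import mean

ANOMALY_THRESHOLD_C = 85.0  # Hypothetical limit for this example.

def summarize_window(sensor_id: str, readings_c: list[float]) -> dict:
    """Collapses a window of raw readings into one small message."""
    anomalies = [r for r in readings_c if r > ANOMALY_THRESHOLD_C]
    return {
        "sensor_id": sensor_id,
        "count": len(readings_c),
        "mean_c": round(mean(readings_c), 2),
        "max_c": max(readings_c),
        "anomalies_c": anomalies,  # Full detail only for readings that matter.
    }

window = [71.2, 70.8, 90.4, 71.0, 70.9]
summary = summarize_window("press-07", window)
# Hand the summary to whatever upload path you use, for example the
# store-and-forward queue sketched earlier: record_transaction(summary)
```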
Edge computing can act as an intermediate communication layer between
legacy and modernized services. For example, a containerized API gateway
(such as Apigee hybrid) running at the edge can enable legacy applications
and systems to integrate with modernized services, like IoT solutions.
Best practices
Consider the following recommendations when implementing the edge hybrid
architecture pattern:
If the solution consists of many remote edge sites connecting to
Google Cloud over the public internet, you can use a software-defined
WAN (SD-WAN) solution. You can also use
Network Connectivity Center
with a third-party SD-WAN router supported by a
Google Cloud partner
to simplify the provisioning and management of secure connectivity at scale.
Minimize dependencies between systems that are running at the edge and
systems that are running in the cloud environment. Each dependency can
undermine the reliability and latency advantages of an edge hybrid setup.
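One way to keep such a dependency from undermining the edge's latency and
reliability advantages is to call the cloud service with a short timeout and
fall back to locally cached data, as in the following sketch. The pricing
service URL and the cached values are hypothetical.

```python
# Keep a cloud dependency from blocking an edge transaction: call the cloud
# service with a short timeout and fall back to locally cached data.
import requests

PRICING_SERVICE_URL = "https://pricing.example.com/v1/prices"  # Hypothetical.
_local_price_cache = {"sku-123": 1999}  # Refreshed whenever a cloud call succeeds.

def get_price_cents(sku: str) -> int:
    try:
        resp = requests.get(f"{PRICING_SERVICE_URL}/{sku}", timeout=0.5)
        resp.raise_for_status()
        price = resp.json()["price_cents"]
        _local_price_cache[sku] = price  # Keep the fallback fresh.
        return price
    except (requests.RequestException, KeyError, ValueError):
        # Cloud unreachable or slow: serve the last known local value instead
        # of failing the business-critical transaction.
        return _local_price_cache[sku]
```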
To manage and operate multiple edge locations efficiently, you should
have a centralized management plane and monitoring solution in the cloud.
Ensure that CI/CD pipelines, along with tooling for deployment and
monitoring, are consistent across cloud and edge environments.
Consider using containers and Kubernetes when applicable and feasible, to
abstract away differences among various edge locations, and also between
edge locations and the cloud. Because Kubernetes provides a common runtime
layer, you can develop, run, and operate workloads consistently across
computing environments. You can also move workloads between the edge and
the cloud.
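As a sketch of this portability, the following example uses the Kubernetes
Python client to apply the same Deployment object to an edge cluster and a
cloud cluster. The kubeconfig context names and the container image are
hypothetical placeholders.

```python
# Because Kubernetes provides a common runtime layer, the same Deployment
# definition can be applied to an edge cluster and a cloud cluster.
from kubernetes import client, config  # pip install kubernetes

def pos_api_deployment() -> client.V1Deployment:
    labels = {"app": "pos-api"}
    return client.V1Deployment(
        metadata=client.V1ObjectMeta(name="pos-api"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels=labels),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=labels),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(name="pos-api",
                                       image="gcr.io/example/pos-api:1.4.2"),
                ]),
            ),
        ),
    )

# Apply the identical workload definition to both environments by switching
# kubeconfig contexts (hypothetical names shown here).
for context in ("edge-store-0042", "gke-cloud-prod"):
    api_client = config.new_client_from_config(context=context)
    client.AppsV1Api(api_client).create_namespaced_deployment(
        namespace="default", body=pos_api_deployment())
```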
To simplify the hybrid setup and operation, you can use
GKE Enterprise
for this architecture (if containers are used across the environments).
Consider the possible connectivity options that are available for connecting
a GKE Enterprise cluster running in your on-premises or edge environment to
Google Cloud.
As part of this pattern, although some GKE Enterprise components can
continue to operate during a temporary connectivity interruption to
Google Cloud, don't use GKE Enterprise while it's disconnected from
Google Cloud as a nominal working mode. For more information, see
Impact of temporary disconnection from Google Cloud.
To overcome inconsistencies in protocols, APIs, and authentication
mechanisms across diverse backend and edge services, we recommend, where
applicable, deploying an API gateway or proxy as a unifying
facade.
This gateway or proxy acts as a centralized control point and performs the
following measures:
Implements additional security measures.
Shields client apps and other services from backend code changes.
Facilitates audit trails for communication between all
cross-environment applications and their decoupled components.
Apigee and Apigee Hybrid let you host and manage enterprise-grade and
hybrid gateways across on-premises, edge, other cloud, and Google Cloud
environments.
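The following is a deliberately small, framework-level sketch of the facade
idea, not an Apigee configuration: a single entry point that normalizes
authentication, routes requests to a legacy or a modernized backend, and
records a simple audit trail. The backend URLs and API key are hypothetical.

```python
# Minimal facade sketch: one entry point shields clients from differences
# between a legacy backend and a modernized service.
from flask import Flask, abort, request  # pip install flask requests
import requests

app = Flask(__name__)

BACKENDS = {
    "inventory": "http://legacy-erp.internal:8080/api",  # Legacy system.
    "telemetry": "https://iot.example.com/v2",           # Modernized service.
}

@app.route("/<service>/<path:rest>", methods=["GET", "POST"])
def proxy(service: str, rest: str):
    if request.headers.get("X-Api-Key") != "demo-key":   # Centralized auth check.
        abort(401)
    base = BACKENDS.get(service) or abort(404)
    upstream = requests.request(
        request.method, f"{base}/{rest}",
        params=request.args, data=request.get_data(), timeout=5)
    app.logger.info("audit %s %s -> %s", request.method, request.path,
                    upstream.status_code)                 # Simple audit trail.
    return upstream.content, upstream.status_code

if __name__ == "__main__":
    app.run(port=8000)
```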
Establish common identity
between environments so that systems can authenticate securely across
environment boundaries.
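As one possible approach, the following sketch uses Google-signed service
account ID tokens: the edge workload attaches an ID token to its requests, and
the cloud-side service verifies it. The audience URL is a hypothetical
placeholder.

```python
# Cross-environment authentication sketch using Google-signed ID tokens.
import google.auth.transport.requests  # pip install google-auth
from google.oauth2 import id_token

AUDIENCE = "https://sync.example.com"  # Hypothetical; must match on both sides.

def edge_auth_header() -> dict:
    """Runs at the edge, using the workload's service account credentials."""
    request = google.auth.transport.requests.Request()
    token = id_token.fetch_id_token(request, AUDIENCE)
    return {"Authorization": f"Bearer {token}"}

def verify_caller(bearer_token: str) -> str:
    """Runs in the cloud-side service; raises ValueError on invalid tokens."""
    request = google.auth.transport.requests.Request()
    claims = id_token.verify_oauth2_token(bearer_token, request, audience=AUDIENCE)
    return claims["sub"]  # Stable identifier of the calling service account.
```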
Because the data that is exchanged between environments might be
sensitive, ensure that all communication is encrypted in transit by using
VPN tunnels,
TLS,
or both.
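The following minimal client-side sketch enforces certificate verification and
a modern TLS version for data leaving the edge; the host name is a
hypothetical placeholder, and a VPN tunnel can be layered underneath or used
instead.

```python
# Require certificate verification and TLS 1.2+ for traffic leaving the edge.
import socket
import ssl

HOST = "sync.example.com"  # Hypothetical endpoint.

context = ssl.create_default_context()           # Verifies certificates and host names.
context.minimum_version = ssl.TLSVersion.TLSv1_2

with socket.create_connection((HOST, 443), timeout=10) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("negotiated", tls_sock.version())  # For example: TLSv1.3
        tls_sock.sendall(b"HEAD / HTTP/1.1\r\nHost: " + HOST.encode() +
                         b"\r\nConnection: close\r\n\r\n")
        print(tls_sock.recv(200).decode(errors="replace"))
```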
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-01-23 UTC."],[[["\u003cp\u003eEdge hybrid architecture allows for running time- and business-critical workloads locally at the edge while utilizing the cloud for other workloads, addressing challenges posed by intermittent or unreliable internet connectivity.\u003c/p\u003e\n"],["\u003cp\u003eThis architecture ensures low latency and self-sufficiency for essential transactions, even during internet outages, while still leveraging cloud benefits for a significant portion of overall workload.\u003c/p\u003e\n"],["\u003cp\u003eImplementing edge hybrid can improve cost efficiency, particularly for IoT projects, by enabling local data processing and selective cloud data transfer.\u003c/p\u003e\n"],["\u003cp\u003eIt's best to minimize dependencies between edge and cloud systems to maximize reliability and latency advantages, while also employing a centralized management and monitoring solution in the cloud for efficient operation.\u003c/p\u003e\n"],["\u003cp\u003eUtilizing containers and Kubernetes, along with a unified API gateway, helps maintain consistency across diverse edge locations and between edge and cloud environments.\u003c/p\u003e\n"]]],[],null,["# Edge hybrid pattern\n\nRunning workloads in the cloud requires that clients in some scenarios have\nfast and reliable internet connectivity. Given today's networks, this\nrequirement rarely poses a challenge for cloud adoption. There are, however,\nscenarios when you can't rely on continuous connectivity, such as:\n\n- Sea-going vessels and other vehicles might be connected only intermittently or have access only to high-latency satellite links.\n- Factories or power plants might be connected to the internet. These facilities might have reliability requirements that exceed the availability claims of their internet provider.\n- Retail stores and supermarkets might be connected only occasionally or use links that don't provide the necessary reliability or throughput to handle business-critical transactions.\n\nThe *edge hybrid* architecture pattern addresses these challenges by running\ntime- and business-critical workloads locally, at the edge of the network, while\nusing the cloud for all other kinds of workloads. In an edge hybrid\narchitecture, the internet link is a noncritical component that is used for\nmanagement purposes and to synchronize or upload data, often asynchronously, but\nisn't involved in time or business-critical transactions.\n\nAdvantages\n----------\n\nRunning certain workloads at the edge and other workloads in the cloud offers\nseveral advantages:\n\n- Inbound traffic---moving data from the edge to Google Cloud---[might be free of charge](/vpc/network-pricing#general).\n- Running workloads that are business- and time-critical at the edge helps ensure low latency and self-sufficiency. If internet connectivity fails or is temporarily unavailable, you can still run all important transactions. 
At the same time, you can benefit from using the cloud for a significant portion of your overall workload.\n- You can reuse existing investments in computing and storage equipment.\n- Over time, you can incrementally reduce the fraction of workloads that are run at the edge and move them to the cloud, either by reworking certain applications or by equipping some edge locations with internet links that are more reliable.\n- Internet of Things (IoT)-related projects can become more cost-efficient by performing data computations locally. This allows enterprises to run and process some services locally at the edge, closer to the data sources. It also allows enterprises to selectively send data to the cloud, which can help to reduce the capacity, data transfer, processing, and overall costs of the IoT solution.\n- Edge computing can act as an [intermediate communication layer](/solutions/unlocking-legacy-applications#section-3) between legacy and modernized services. For example, services that might be running a containerized API gateway such as Apigee hybrid). This enables legacy applications and systems to integrate with modernized services, like IoT solutions.\n\nBest practices\n--------------\n\nConsider the following recommendations when implementing the edge hybrid\narchitecture pattern:\n\n- If communication is unidirectional, use the [gated ingress pattern](/architecture/hybrid-multicloud-secure-networking-patterns/gated-ingress).\n- If communication is bidirectional, consider the [gated egress and gated ingress pattern](/architecture/hybrid-multicloud-secure-networking-patterns/gated-egress-ingress).\n- If the solution consists of many edge remote sites connecting to Google Cloud over the public internet, you can use a software-defined WAN (SD-WAN) solution. You can also use [Network Connectivity Center](/network-connectivity/docs/network-connectivity-center/concepts/ra-overview) with a third-party SD-WAN router supported by a [Google Cloud partner](/network-connectivity/docs/network-connectivity-center/partners) to simplify the provisioning and management of secure connectivity at scale.\n- Minimize dependencies between systems that are running at the edge and systems that are running in the cloud environment. Each dependency can undermine the reliability and latency advantages of an edge hybrid setup.\n- To manage and operate multiple edge locations efficiently, you should have a centralized management plane and monitoring solution in the cloud.\n- Ensure that CI/CD pipelines along with tooling for deployment and monitoring are consistent across cloud and edge environments.\n- Consider using containers and Kubernetes when applicable and feasible, to abstract away differences among various edge locations and also among edge locations and the cloud. Because Kubernetes provides a common runtime layer, you can develop, run, and operate workloads consistently across computing environments. You can also move workloads between the edge and the cloud.\n - To simplify the hybrid setup and operation, you can use [GKE Enterprise](/anthos/docs/concepts/gke-editions) for this architecture (if containers are used across the environments). 
Consider [the possible connectivity options](/anthos/clusters/docs/bare-metal/latest/concepts/connect-on-prem-gcp) that you have to connect a GKE Enterprise cluster running in your on-premises or edge environment to Google Cloud.\n- As part of this pattern, although some GKE Enterprise components might sustain during a temporary connectivity interruption to Google Cloud, don't use GKE Enterprises when it's disconnected from Google Cloud as a nominal working mode. For more information, see [Impact of temporary disconnection from Google Cloud](/anthos/docs/concepts/anthos-connectivity).\n- To overcome inconsistencies in protocols, APIs, and authentication mechanisms across diverse backend and edge services, we recommend, where applicable, to deploy an API gateway or proxy as a unifying [facade](/apigee/resources/ebook/api-facade-pattern-register). This gateway or proxy acts as a centralized control point and performs the following measures:\n - Implements additional security measures.\n - Shields client apps and other services from backend code changes.\n - Facilitates audit trails for communication between all cross-environment applications and its decoupled components.\n - Acts as an [intermediate communication layer](/solutions/unlocking-legacy-applications#section-3) between legacy and modernized services.\n - Apigee and [Apigee Hybrid](/apigee/docs/hybrid/v1.10/what-is-hybrid) let you host and manage enterprise-grade and hybrid gateways across on-premises environments, edge, other clouds, and Google Cloud environments.\n- [Establish common identity](/architecture/authenticating-corporate-users-in-a-hybrid-environment) between environments so that systems can authenticate securely across environment boundaries.\n- Because the data that is exchanged between environments might be sensitive, ensure that all communication is encrypted in transit by using VPN tunnels, [TLS](/architecture/landing-zones/decide-security#option-2-require-layer7), or both."]]