
The Edge Opportunity — Platform Matters

RESEARCH BRIEF

Sponsored by


Table of Contents

Introduction
The Edge Opportunity
5G and IoT as Edge Drivers
Benefits of Edge Computing
Edge Computing Use Cases and Applications
The Different Edges
Enterprise Campus
Remote Enterprise Branches and Retail
Sports, Entertainment and Other Public Venues
Next-Gen Central Office, Points of Presence Locations, Radio Base Stations
Offshore or Other Self-Contained Locations
Requirements for Edge Platforms
Environmental Needs
Reliability Considerations
Performance Issues
Security Considerations
Remote Configuration, Visibility, and Troubleshooting
Edge Platform is Not Just a Hardware Decision
Edge - The Software Angle
Infrastructure Platform and OS Decision
VM Support, Container Support
Integration Testing with Pre-Validated Stacks
Performance Enhancements — DPDK, SR-IOV and more
Manageability and Ease of Deployment
Autonomy
Security and Compliance

The Right Partner for the Edge — What Matters?
Rich Platform Selection
Wide Geocoverage
Comprehensive Platform Certifications
Robust R&D Investment
Flexibility in Customization
Security Awareness
Comprehensive Support SLAs and Extended End-of-Life (EOL) Policies
Favorable Total Cost of Ownership (TCO) Metrics
Conclusion

Research Briefs are independent content created by analysts working for AvidThink LLC. These reports are made possible through the sponsorship of our commercial supporters. Sponsors do not have any editorial control over the report content, and the views represented herein are solely those of AvidThink LLC. For more information about report sponsorships, please reach out to us at [email protected]

About AvidThink™

AvidThink is a research and analysis firm focused on providing cutting-edge insights into the latest in infrastructure technologies. Formerly SDxCentral’s research group, AvidThink launched as an independent company in October 2018. Over the last five years, over 110,000 copies of AvidThink’s research reports (under the SDxCentral brand) have been downloaded by 40,000 technology buyers and industry thought leaders. AvidThink’s expertise covers Edge and IoT, SD-WAN, cloud and containers, SDN, NFV, hyper-convergence and infrastructure applications for AI/ML and security. Visit AvidThink at www.avidthink.com

Introduction

The edge computing hype engine has been revving, with some in the business press touting edge as a new market that could eventually surpass today’s cloud market. With two other hot trends, 5G and the Internet of Things (IoT), tied into edge opportunities, there’s certainly no lack of exciting applications and capabilities the edge can bring. With rich use cases that span augmented and virtual reality (AR/VR), autonomous vehicles, gaming, remote surgery, and video processing and analytics, the edge appears to have no bounds.

In this AvidThink research brief, we’ll turn a critical eye on the edge opportunity, looking at the use cases and the software and hardware infrastructure and platform requirements for the edge. This brief draws on research conducted with network equipment providers (NEPs) as well as subject matter experts at major communication service providers (CSPs) and application developers. For NEPs and system integrators, we aim to provide guidance on selecting the right platform partner for the edge computing market that is upon us.

Throughout this brief, we’ll use the term NEP to mean network solution providers that have both software and hardware offerings. We envision that some of these NEPs will build integrated edge platforms that CSPs can deploy as turnkey edge infrastructure. Certainly, we expect CSPs to also work directly with platform providers to procure families of platforms for deployment across edge locations (CPEs in offices, servers in central offices, and perhaps rugged micro data centers located near radio base stations). CSPs will want to build commoditized edge platforms to attract enterprise and edge application developers. At the same time, cloud providers and next-generation edge infrastructure players will attempt to deploy platforms on which these same developers and enterprises can create and run edge applications. The market here will evolve rapidly as multiple stakeholders attempt to make progress in this new land grab.

The Edge Opportunity

With a market that is forecast to reach U.S. $28.8 billion by 2025 at a CAGR of 54% [1], edge computing is hot. Our view, though, which aligns with that of most CSPs, is that the edge is not an alternative to the cloud, as some postulate, but an augmentation or extension of the cloud. As edge computing rolls out, we anticipate a wide spectrum of different application services that run anywhere from the edge (customer premises, central office locations, and micro data centers near cell towers), to local data centers and to regional and national data centers.

[1] “Edge Computing Market Size Worth $28.84 Billion By 2025,” https://www.grandviewresearch.com/press-release/global-edge-computing-market

Across these locations, there need to be appropriate platforms that fit each location’s requirements and unique needs — data center servers that work well in today’s large-scale data centers will need to be appropriately modified for edge deployments.

5G and IoT as Edge Drivers

5G is enabling a host of new applications for enterprises and consumers. While 5G dramatically increases speeds and feeds — 10+ Gbps peak data rates, 10-100 times more devices supported, less than 1 millisecond radio latency, and improved reliability — its true power lies in completely transforming the mobile communications network. The combination of these improvements with network slicing (the end-to-end partitioning of network bandwidth with different performance, availability, and scalability characteristics) creates a whole range of new services connecting devices, users, and enterprises.

Speaking of devices, IoT promises many of them at the edge, with analyst firm IDC estimating 41.6 billion connected IoT devices by 2025 [2]. With these new applications and the proliferation of devices, CSPs worldwide have recognized the need for compute, storage, and networking infrastructure to be placed close to the locations where these applications are consumed. The benefits of reduced latency, improved throughput, better security and isolation, along with data reduction and context- and location-awareness, make edge computing a compelling area of infrastructure investment for CSPs.

[2] “Worldwide Global DataSphere IoT Device and Data Forecast, 2019-2023,” https://www.idc.com/getdoc.jsp?containerId=prUS45213219

In this research brief, we will not delve into parsing out the differences in terms that define computing at the edge, including multi-access edge computing (MEC), fog computing, and distributed cloud. In general, these refer to slightly different architectures and coverage areas. However, they are related architectures that involve adapting cloud designs from hyper-scale web companies to fit edge deployments. Sometimes, these edge platforms run segregated from central clouds, but in most cases they run in conjunction with public or private clouds located in large data centers. Figuring out these deployment architectures takes time and depends on the needs of each use case.

Benefits of Edge Computing

While cloud platforms have been remarkably successful over the last decade, these centralized data centers do not necessarily meet all application needs. This is where edge computing steps in, with the following unique benefits:

• Latency: The edge can provide latency of a few milliseconds or even less, while multiple hops and long data transmission distances mean that latency to the current “edge” of the network, usually in the form of a content distribution network (CDN), is in the 50-150 ms range today. Latency to centralized data centers and the public cloud can be greater still. (A rough propagation-delay sketch follows this list.)

• High throughput: The throughput available to the user from the edge, served via cached or locally generated content, can be orders of magnitude greater than from a core data center or even the CDN.

• Data reduction: By running data analytics at the edge, operators and application vendors can cut down the amount of data that has to be sent upstream. This cuts costs and allows for more efficient use of available bandwidth.

• Context awareness: The edge has access to the radio network, and potentially to user and location information provided by the radio access network (RAN); this information can be used by edge applications to personalize or customize responses.

• Security: CSPs can protect their networks against attacks from the user equipment (UE) or customer premise equipment (CPE) using edge applications, thereby stopping these attacks before they propagate further through the CSP infrastructure.


Page 6: The Edge Opportunity — Platform Matters · SD-WAN, cloud and containers, SDN, NFV, hyper-convergence and infrastructure applications for AI/ML and security. ... In this AvidThink

© Copyright 2019 AvidThink LLC, All Rights Reserved Page 3

The Edge Opportunity — Platform Matters

• Resilience in isolation: A number of environments are not always connected to the internet over high-speed links. An edge cloud is able to provide services during periods of degraded or lost connections. Later we’ll discuss situations where edge platforms can effectively act as an isolated data center in locations that are permanently not served by significant connectivity (e.g., remote offshore locations).

• Compliance and privacy: Edge applications, which can handle local processing of data without transmitting that data across state or national boundaries, can help with compliance with privacy or data-location laws.

[Figure: Benefits of Edge Computing Platforms: low latency, high throughput, data reduction, context awareness, security, resilience, compliance and privacy]
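To ground the latency figures above, a back-of-the-envelope calculation (ours, not drawn from the sources cited in this brief) shows why physical distance alone pushes centralized clouds out of the single-digit-millisecond range. The distances below are illustrative assumptions, and the math counts propagation delay only, ignoring queuing and per-hop processing.

    # Rough, illustrative latency math for the latency benefit above.
    # Assumes signals propagate in fiber at roughly 2/3 the speed of light;
    # the distances are hypothetical examples, not measured figures.

    SPEED_OF_LIGHT_KM_S = 300_000          # approximate speed of light in vacuum, km/s
    FIBER_FACTOR = 2 / 3                   # typical propagation speed in optical fiber

    def round_trip_ms(distance_km: float) -> float:
        """One-way distance -> round-trip propagation delay in milliseconds."""
        one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
        return 2 * one_way_s * 1000

    for label, km in [("edge site, ~20 km away", 20),
                      ("regional data center, ~500 km away", 500),
                      ("distant cloud region, ~3,000 km away", 3000)]:
        print(f"{label}: ~{round_trip_ms(km):.1f} ms round trip (propagation only)")

Real-world latency is higher once routing, queuing, and server processing are added, which is why observed CDN latency sits in the tens of milliseconds while a nearby edge site can stay in the low single digits.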

Edge Computing Use Cases and Applications

Today, CSPs are trying out different edge computing/MEC applications, as are other cloud providers and vendors attempting to penetrate the edge market. Many proofs of concept today are focused on CDN-type use cases, including video caching and transcoding, immersive video (AR/VR), RAN disaggregation, RAN optimization, IoT, and analytics. Gaming is viewed as another potentially hot use case. For CSPs, the edge also becomes another host location for NFV workloads. However, moving from POCs to mainstream adoption of edge computing and MEC requires overcoming some major challenges. The main issues include working through the details of the underlying NFV platform, ensuring seamless mobility support, supporting multi-operator applications, addressing scalability and performance, and ensuring security and compliance. In addition, application providers and NEPs must address the software and hardware infrastructure at these edge locations.

For NEPs working with CSPs, many of the operator applications likely fall into the categories of caching, analytics, compliance, security, and general NFV.

Caching and Transcoding

For operators that provide video-on-demand services, either directly or with partners, the edge is an obvious location to place content caches. Reducing the latency in streaming video is certainly a plus, but the main value is a reduction in bandwidth utilization over transport and backhaul links. By keeping popular content close to the subscribers and end users, the CSP can preserve network capacity. Moreover, by providing transcoding services near the edge, the CSP can transform data into the right form and at the right bitrate near the subscriber, contributing to lower jitter, loss, and a higher quality of experience for the end user.

Analytics

The edge collects a large amount of data about users, network conditions, local context, and consumer behavior that can be invaluable to operational support systems (OSS) and business support systems (BSS) applications. Sending all of this data to the core may be counterproductive for two reasons: high latency and wasted bandwidth. First, in many closed-loop automation situations, such as performance degradation requiring corrective action within milliseconds, letting a centralized application drive the loop is simply not practical. Second, the amount of data generated by edge devices and functions can be substantial, and it is much more cost-effective to run analytics at the edge and send small batches of processed information to the core. Analytics may include a range of activities such as event correlation, big data applications, IoT data processing, video analytics, and machine learning.
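As a simple illustration of the data-reduction argument, the hypothetical sketch below (our own code, not drawn from any CSP deployment) aggregates a window of raw telemetry at the edge and forwards only a compact summary upstream; the function and field names are illustrative only.

    # Hypothetical sketch: aggregate raw edge telemetry locally and forward only
    # compact summaries upstream, instead of shipping every raw sample to the core.
    from statistics import mean
    from typing import Iterable

    def summarize_window(samples: Iterable[float]) -> dict:
        """Reduce a window of raw measurements to one small summary record."""
        values = list(samples)
        return {"count": len(values), "mean": mean(values),
                "min": min(values), "max": max(values)}

    def process_at_edge(raw_stream, window_size=1000):
        """Yield one summary per window of raw samples."""
        window = []
        for sample in raw_stream:
            window.append(sample)
            if len(window) >= window_size:
                yield summarize_window(window)   # this is what goes upstream
                window.clear()
        if window:
            yield summarize_window(window)

    # Roughly a 1000:1 reduction: 10,000 raw samples become 10 summary records.
    summaries = list(process_at_edge(float(i % 97) for i in range(10_000)))
    print(len(summaries), summaries[0])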

Compliance

Compliance consists of a wide variety of applications that could range from copyright enforcement to geographical data placement. Copyright enforcement comes into play during concerts, plays, and sports events where an audience member does not have the right to transmit the event via their cell phone. An edge application could either disable the upstream transmission completely or reduce the resolution to make the transmission compliant. Geographic placement becomes relevant when, by law, a certain piece of data has to reside in a particular geography. Edge applications can enforce these laws and add value by processing the data locally, within the appropriate jurisdiction.

Security

Generally, CSPs have protected themselves against attacks from the internet. However, recent cyberattacks have demonstrated that as UE, IoT, and CPE devices become more sophisticated, attacks can be mounted from inside the network. Edge computing allows applications such as DDoS mitigation and other cybersecurity functions to stop these attacks closer to their source, moving the security perimeter outward. Another aspect of security relates to the network slicing concept from 5G: by leveraging edge platforms to run private LTE services, it is possible to create effective end-to-end slices for an enterprise that are insulated from other traffic.

Virtualized RAN and Generalized NFV Platform

Given the location of edge platforms, one set of functions likely to fit well is the elements of a virtualized RAN. Just as we have disaggregated other network functions, the RAN can be broken into different components, including the Centralized Unit (CU), the Distributed Unit (DU), and of course, the radio units. CU and DU functions can run on edge platforms, taking advantage of commodity platforms and flexible component configurations. In addition to running virtualized and disaggregated RAN functions at the edge, there are likely other telco functions that could be hosted as well. While these network functions are not, strictly speaking, edge applications, they will probably want to run in the same locations as edge computing applications. Beyond the virtualized RAN, other NFV candidates include virtual evolved packet cores (vEPC), virtual broadband network gateways (vBNG), virtual cable modem termination systems (vCMTS), and virtual optical line terminals (virtual OLT). In fact, the European Telecommunications Standards Institute (ETSI) MEC approach is highly synergistic with the ETSI NFV approach.

Third-Party Edge Applications

Ultimately, we believe that third-party applications will unleash network innovation and new services, similar to how third-party applications revolutionized smartphones. CSPs will, therefore, want to open up the ecosystem at the edge, enabling platforms and APIs that encourage third-party application developers to create new applications that take advantage of the unique nature of the edge platform.

Some of these new applications allow for more immersive, real-time experiences that handle large amounts of data locally without clogging up the bandwidth pipes into the public clouds. Other applications like gaming can also benefit from the lower latencies near the edge, improving the real-time feel of online games.

The Different Edges

Having explored the different classes of applications that can run at the edge, we’ll turn our attention to the wide spectrum of locations that could be classified as the edge. Again, we’ll want to emphasize that there isn’t a clear delineation of where the central cloud ends and the edge begins — it’s a continuum, and the edge should be viewed as an extension of centralized cloud platforms.

Enterprise Campus

With the recent rise of software-defined WAN (SD-WAN) solutions for enterprises, and increasing acceptance of CSP-managed CPE platforms at the enterprise edge, CSPs could host other services on these platforms. The CPE within an enterprise location is a viable platform to run operator services such as IoT gateways, or even vEPC stacks for private LTE services. If an enterprise wants to provide connectivity directly from the 5G RAN to the enterprise for security reasons, the edge provides an excellent platform to terminate this service. Even cloud providers like AWS and Microsoft offer their cloud software running on local infrastructure — AWS Outposts and Greengrass, or Azure IoT Edge, for example — in essence making the enterprise CPE an edge platform.

Remote Enterprise Branches and Retail

The enterprise remote branch edge is also an excellent location to provide branch connectivity and other enterprise services. As mentioned earlier, the rise of SD-WAN has made CSP-managed CPEs available to host SD-WAN functions, as well as firewall and security functions or WAN optimization functions. In addition, these CPEs, located in branches such as retail sites, can run a range of applications like unified communication and collaboration (UCC), facial recognition for local, personalized advertising, and AI-powered surveillance for security and retail shrinkage prevention.

Sports, Entertainment and Other Public Venues

A somewhat related, self-contained use case is venues offering small-cell services, such as stadiums, concert halls, airports, places of worship, universities, and smart buildings, where the edge can offer local services. For example, an edge application could allow stadium spectators to watch a game from numerous perspectives and offer them personalized high-definition content without burdening upstream bandwidth.

Next-Gen Central Office, Points of Presence Locations, Radio Base Stations

Both wireline and wireless CSPs have physical location assets from their existing lines of business. Many of these locations are aggregation points for their existing connectivity services, whether a central office (CO) for POTS subscribers or a cell tower and base station for wireless users. These locations are already CSP-controlled and close to end users, making them ideal for edge platform placement. There may be form factor and power constraints, particularly at these remote edges, which we’ll discuss in more detail later.


Offshore or Other Self-Contained Locations

In an age of seemingly universal connectivity, it may seem strange that there are still environments with intermittent, highly constrained, or no connectivity. These include cruise ships, planes, mines, farms, oil rigs, trains, pipelines, wind farms, solar power plants, and power grids. Edge computing is critical to rolling out new services in these environments. The data from these locations can ultimately be synchronized with the cloud when connectivity is available, such as when a ship docks or a plane lands. In the meantime, having a standardized edge platform in these places allows both CSPs and other application providers to build out new applications cost-effectively.

Requirements for Edge Platforms

The edge compute environment is going to unleash a wide variety of new hardware and software platforms. For these mini or micro data centers, which exist in areas that previously didn’t house general-purpose compute servers, there is a whole host of new considerations. Plopping a standard 2RU 19-inch rack-mounted x86 server into a remote central office isn’t always going to work.

As we examine the numerous platform requirements, we’ll also note that there isn’t really a standard edge platform. Standardization of the edge computing/MEC platform architecture is an ongoing process involving standards bodies and consortiums such as the ETSI MEC ISG, the OpenFog Consortium (whose reference architecture has been adopted by the IEEE), the Open Edge Computing Initiative, and the Kinetic Edge Alliance. In addition, there are open-source hardware groups, like the Open Compute Project, and open-source software foundations, such as the Linux Foundation with LF Edge and the OpenStack Foundation, that are using collaborative open source to bring some standardization to edge platforms. Sorting out the edge standards and open-source stacks will take time, but as an industry, we already understand some fundamental requirements. We’ll examine the key elements here.

[Figure: Common Edge Locations: enterprise campus, branches/retail, sports and entertainment venues, central office, cellular base stations, offshore rigs, cruise ships, planes, and similar sites]


Environmental Needs

Certain edge locations like older COs do not have the power and cooling capabilities commonly found in a data center. In these locations, the edge platform needs to accommodate wider temperature ranges and operate with lower power. Likewise, the racking or placement of servers in these locations might result in more mechanical jostling. In these types of environments, solid-state media and robust mechanicals may be required. There may also be space constraints in far edge locations near cellular towers, driving non-standard form factors.

Reliability Considerations

Edge locations might be expensive or difficult to get to, so ensuring that the overall system has a high mean time between failures (MTBF) is important. The hardware platform should be engineered to last longer than in regular data centers. We’ve already pointed out the preference for solid-state media in these locations; in addition, backup or redundant components might be necessary to ensure uptime while a replacement unit or component is being shipped out.

Performance Issues

Locations at the edge tend to have a smaller power budget available for the server platforms. At the same time, these edge servers are expected to process I/O at a high rate. For instance, running virtualized RAN deployments might require non-CPU compute elements like digital signal processors (DSPs) or field-programmable gate arrays (FPGAs). To meet these demands within the power budget, architectures assisted by hardware acceleration, either onboard or via a network interface card (NIC), might be in order. Furthermore, one of the more popular use cases for these edge platforms is video processing and analysis. Beyond FPGA-based smart NICs, GPUs might become a critical component of these platforms: for the same power budget, GPUs can provide more video processing horsepower than a general-purpose CPU.

[Figure: Requirements for Edge Platforms. Environmental needs: temperature and humidity, mechanical robustness, form factor. Reliability: high-MTBF components, extended support, local redundancy. Performance: high I/O throughput, FPGA/GPU assist. Configuration and troubleshooting: secure remote configuration, preemptive notifications, ongoing monitoring.]


Security Considerations

Some edge locations are not physically secure, and CSPs who have run early edge POCs are sensitive to servers being stolen or compromised. On the hardware side, trusted platform modules (TPMs) or secure enclave mechanisms that can store critical key material might be necessary. Tamper-detection capabilities on the hardware (e.g., setting a hardware and software flag if the server chassis is opened) are important if sensitive data, such as healthcare records, needs to be processed on the edge platform. We’ll later discuss the software platform capabilities needed to ensure security in the face of physical compromise of the server.

Remote Configuration, Visibility, and Troubleshooting

As discussed earlier, due to the sometimes hard-to-reach nature of edge locations, the hardware platform needs to provide strong troubleshooting and monitoring capabilities. Ideally, preemptive notification of potentially failing hardware modules would be possible, with issues addressed before they cause outages. Similar to the baseboard management controllers in data center servers, edge platforms need remote management capabilities that allow troubleshooting, remediation, and remote reset if a platform is not functioning as expected. Further, advanced capabilities, including secure remote hardware provisioning and configuration over wide area networks through protocols like Redfish (which is replacing legacy Intelligent Platform Management Interface, or IPMI, standards), will likely be required.
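As a rough sketch of what Redfish-based remote management looks like in practice, the example below queries a baseboard management controller (BMC) and requests a system reset over the standard DMTF Redfish REST paths. The BMC address, credentials, and system ID are placeholders; actual resource IDs, certificates, and supported reset types vary by platform.

    # Hedged sketch of out-of-band management over Redfish, DMTF's REST-based
    # successor to legacy IPMI. Host, credentials, and the system ID ("1") are
    # placeholders; real BMCs expose their own IDs and reset options.
    import requests

    BMC = "https://bmc.edge-site-042.example.net"   # hypothetical BMC address
    AUTH = ("admin", "password")                    # placeholder credentials

    # Discover the managed systems exposed by this BMC.
    # verify=False only because many BMCs ship with self-signed certificates.
    systems = requests.get(f"{BMC}/redfish/v1/Systems", auth=AUTH, verify=False).json()
    for member in systems.get("Members", []):
        print("Found system:", member["@odata.id"])

    # Remotely reset one system, e.g. after it stops responding.
    resp = requests.post(
        f"{BMC}/redfish/v1/Systems/1/Actions/ComputerSystem.Reset",
        json={"ResetType": "GracefulRestart"},
        auth=AUTH,
        verify=False,
    )
    print("Reset requested, HTTP status:", resp.status_code)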

Edge Platform is Not Just a Hardware Decision

When the industry discusses edge platforms, we tend to talk about ruggedization, power consumption, and other hardware needs. However, the edge platform is a combination of hardware and software. The infrastructure software supporting edge applications, along with its management and orchestration, is just as important a decision as the hardware itself.

Edge - The Software Angle

For a CSP looking to pick a universal platform, or a NEP looking to create a white-box-based edge solution, the edge promises to be a much larger challenge than centralized data centers. For one, data center servers are usually in close proximity to each other, linked together with high-speed local network connections. Even in lights-out data centers, IT personnel are usually close at hand. Edge platforms will be deployed in numbers at least one or two orders of magnitude greater than existing data center servers, many of them in locations that are expensive to access.

Infrastructure Platform and OS Decision

From the software side, picking the right platform involves ensuring that the selected operating system supports the underlying hardware. Generally speaking, most edge platforms run a Linux variant, with some using a BSD Unix-based system instead. There may also be occasions where the edge platform boots into a lightweight hypervisor, or a minimal OS, to reduce software overhead.

In addition, any unique hardware capabilities on the edge platform — FPGA-based acceleration or GPUs, for example — need adequate integration into the operating system or hypervisor, including configuration, provisioning, and firmware-related elements.



Beyond the operating system, the underlying edge platform stack usually involves a hypervisor to host VMs, a container runtime, or a combination of both. In some current deployments, a virtualized infrastructure manager like OpenStack comes into play, while in others an edge stack — such as LF Edge’s Akraino, the OpenStack Foundation’s StarlingX, or a lighter-weight Edge Virtualization Engine — or another commercial or open-source alternative runs on the platform. To manage these remote locations as a large-scale distributed computing platform, a combination of OpenStack and/or Kubernetes will be used to manage the VMs and containers in these locations. Efforts are underway to enhance Kubernetes to manage large numbers of distributed locations; after all, Kubernetes was originally developed for clusters running in a single data center with fast LAN links between servers. Early indications are that treating a set of thousands or more nodes as a single cluster is not necessarily the right approach.
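As a small illustration of what managing distributed nodes with Kubernetes can look like, the sketch below uses the official Python client to inventory nodes by a per-site label. The label key is our own hypothetical convention, not part of Kubernetes or of any edge stack named above, and real multi-site designs may use multiple clusters rather than one.

    # Hedged sketch: inventory Kubernetes nodes across edge sites, assuming each
    # node was given a hypothetical "example.com/edge-site" label at provisioning
    # time. Uses the official kubernetes Python client.
    from collections import defaultdict
    from kubernetes import client, config

    config.load_kube_config()              # or config.load_incluster_config()
    v1 = client.CoreV1Api()

    nodes_by_site = defaultdict(list)
    for node in v1.list_node().items:
        labels = node.metadata.labels or {}
        site = labels.get("example.com/edge-site", "unlabeled")
        nodes_by_site[site].append(node.metadata.name)

    for site, names in sorted(nodes_by_site.items()):
        print(f"{site}: {len(names)} node(s) -> {', '.join(names)}")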

The reality today is that most edge deployments are POCs. Field trials are underway but will have to evolve and mature to support large-scale, production-grade deployments.

VM Support, Container Support

As edge platforms evolve, AvidThink expects to see hybrid edge platforms capable of running both containers and VMs — a similar situation to carrier NFV in the core. VNF vendors and edge application developers span the spectrum of software development maturity: some are already shipping container-based cloud-native applications, while others will continue to package their solutions as VMs. As mentioned earlier, Kubernetes and OpenStack (or other management platforms) will likely need to be enhanced to manage these hybrid environments. Furthermore, these environments will be expected to support third-party and enterprise applications (edge is, after all, supposed to be a generalized application platform) and will need to solve problems associated with resource contention, security, and isolation.

Integration Testing with Pre-Validated Stacks

Regardless of which edge stack and operating system are selected, the CSP or NEP needs to ensure that the edge stack is adequately tested on the selected edge platform. Just as the Linux Foundation’s OPNFV project ties together ongoing CI/CD-type testing on NFV platforms, the CSP or NEP needs to ensure that the combination of hardware and software is adequately tested. This is true regardless of whether an open-source platform, a commercial derivative of one, or a purely proprietary edge stack is chosen.

Performance Enhancements — DPDK, SR-IOV and more

From a software perspective, the platform needs to support any underlying acceleration mechanisms. Whether on x86 or ARM platforms, software-based acceleration techniques such as DPDK or FD.io’s VPP need to be supported by the software stack. In certain situations, when more direct access to the hardware is needed, SR-IOV or even PCI passthrough needs to be supported by the combination of the edge platform and the software stack running on it. We expect the need for high-throughput I/O at the edge, so I/O handling requires acceleration, either built into the CPU chipset or via external mechanisms such as FPGAs, GPUs, NPUs, or secondary ASICs. These require software and OS support to ensure the edge applications can take full advantage of the acceleration capabilities.
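As one concrete example of the kind of platform support involved, the sketch below enables SR-IOV virtual functions on a NIC through the standard Linux sysfs interface (full DPDK or VPP configuration is beyond a short sketch). The interface name is a placeholder, the script needs root privileges, and the NIC, driver, and kernel must all support SR-IOV.

    # Hedged sketch: enable SR-IOV virtual functions (VFs) on a NIC via the
    # standard Linux sysfs interface. "eth0" is a placeholder interface name.
    from pathlib import Path

    IFACE = "eth0"                                    # hypothetical edge NIC
    dev = Path(f"/sys/class/net/{IFACE}/device")

    total_vfs = int((dev / "sriov_totalvfs").read_text())
    print(f"{IFACE} supports up to {total_vfs} virtual functions")

    # Kernel convention: reset the VF count to 0 before requesting a new count.
    (dev / "sriov_numvfs").write_text("0")
    (dev / "sriov_numvfs").write_text(str(min(total_vfs, 8)))
    print("Enabled", (dev / "sriov_numvfs").read_text().strip(), "VFs")

The resulting VFs can then be handed to VMs or containers (via PCI passthrough or a suitable device plugin) so that edge workloads get near-line-rate I/O without traversing the host's software switch.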

Manageability and Ease of Deployment

Comprehensive management and orchestration solutions are required to manage a very large number of edge compute instances. For ease of deployment, low-touch capabilities, similar to the zero-touch provisioning (ZTP) efforts seen with SD-WAN, will be needed. Edge computing deployments will likely have one or two orders of magnitude more nodes than SD-WAN, so a robust touchless bring-up and provisioning process is critical for a fast and successful rollout. Further, since these platforms will be running NFV workloads, there will need to be coordination between edge application management and NFV management and orchestration — it’s likely these will converge into a single orchestration system in due course.
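To illustrate the low-touch bring-up idea, the hypothetical sketch below shows a "call home" step in which a freshly booted edge node reports its hardware serial number to a provisioning service and stores the returned site configuration. The endpoint, request fields, and file paths are illustrative assumptions, not any vendor's ZTP API.

    # Hypothetical zero-touch provisioning flow: on first boot, an edge node
    # calls home with its serial number and receives site-specific configuration.
    import json
    import requests

    PROVISIONING_URL = "https://provision.example.net/api/v1/bootstrap"  # placeholder

    def read_serial() -> str:
        # DMI product serial on typical x86 Linux systems; requires root.
        with open("/sys/class/dmi/id/product_serial") as f:
            return f.read().strip()

    def call_home() -> dict:
        resp = requests.post(
            PROVISIONING_URL,
            json={"serial": read_serial(), "model": "edge-node"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()      # e.g. site ID, overlay config, certificates

    if __name__ == "__main__":
        bootstrap = call_home()
        with open("/etc/edge/bootstrap.json", "w") as f:
            json.dump(bootstrap, f, indent=2)
        print("Provisioned for site:", bootstrap.get("site_id"))

In a production design, the call-home step would authenticate the device (for example, with a factory-installed certificate or TPM-backed identity) before any configuration is released.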


Further, edge applications span multiple tiers, all the way from on-premises to the cloud. Application developers need tools to distribute their applications across different locations based on latency, performance, cost, and other parameters. The expectation is that the software stack running on the edge platform will provide application developers with sophisticated and location-specific packaging, deployment, and distribution capabilities.

Autonomy

One of the unique issues at the edge is that connectivity may not always be guaranteed. Software at the edge needs to support disconnected operation. Autonomy ensures that edge environments continue to deliver the required services and functionality even when external connectivity or services may be failing. This means that decision-making cannot be centralized, and the edge environment must have the intelligence it needs in order to continue uninterrupted operation on its own. This requires automation of a number of tasks such as discovery, orchestration, security, operations, and management.
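A minimal sketch of this disconnected, store-and-forward behavior follows, under our own simplifying assumptions: a crude TCP reachability probe stands in for real health checks, and printing stands in for a real upstream transport.

    # Hedged sketch of autonomous operation: keep serving and recording locally,
    # queue upstream updates, and flush the queue whenever connectivity returns.
    import queue
    import socket
    import time

    pending = queue.Queue()

    def uplink_available(host="cloud.example.net", port=443, timeout=2) -> bool:
        """Crude reachability probe; a real agent would use richer health checks."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def send_upstream(record) -> None:
        print("synced:", record)        # placeholder for a real transport

    def handle_locally(event) -> dict:
        """Local decision-making continues regardless of connectivity."""
        return {"event": event, "action": "served-locally", "ts": time.time()}

    def run_once(event) -> None:
        pending.put(handle_locally(event))
        while uplink_available() and not pending.empty():
            send_upstream(pending.get())

    for e in ["sensor-reading", "video-clip-metadata", "policy-hit"]:
        run_once(e)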

Security and Compliance

The edge computing software stack has to be secure. Both ETSI MEC and OpenFog describe security requirements in detail. Security can be approached from a persona point of view (user, network operator, third-party app provider, app developer, content provider, platform vendor, IoT device) or from an attribute point of view (privacy, integrity, trust, attestation, verification, measurement). In addition to security, there are also compliance considerations, such as allowing lawful interception, ensuring that only applications authorized to access certain information are able to do so, and maintaining data location as required by local regulations. Much of this is likely to be more of a software platform consideration than a hardware one.

Having now examined both hardware and software requirements for the edge, we’ll turn our attention to finding and evaluating the right platform partner for the edge.

The Right Partner for the Edge — What Matters?

Whether NEP or CSP, the edge is an area of active exploration, and while the standards and software stacks develop, many forward-looking organizations already have a mandate to start deploying edge platforms to run POCs and trials. With the requirements above in mind, we’ll discuss the key attributes that matter when edge-ready NEPs and CSPs select a platform partner.

Rich Platform Selection

As we’ve discussed, there isn’t one edge, but multiple edges across a continuum. Likewise, there’s likely not a single edge platform, but a family of platforms. The right platform partner will have rich options across CPU, storage, and networking. In addition, the power and cooling needs at the edge are likely more complicated, as is the ability to support redundancy and to design for increased ruggedness and reliability. The platform partner should be able to provide alternate components, such as MIL-SPEC parts, designed to operate well in harsh environments.

In addition, given the variations in environments at the edge, the platform form factor will also vary: full-depth, half-depth, half-width, and other CPE-type sizes will likely come into play to accommodate locations that aren’t standard data centers with 19” racks. In particular, for edge deployments at cell sites, the form factor will have to be relatively compact, and the compute requirements could include acceleration capabilities in the form of DSPs, GPUs, or FPGAs to provide appropriate compute within a limited power and space budget. As such, the platform partner needs the design sophistication to accommodate different configurations and the manufacturing clout to deliver them cost-effectively.


Another element to evaluate is what type of remote management and lights-out operations capabilities the platform partner can provide, and how secure that management is. This is of particular concern since the platforms will be distributed across varied locations in the network.

Wide Geocoverage

With the edge, locations are likely to span the globe. It’s therefore important to evaluate a partner’s presence in different countries and its capabilities, including drop-shipping units, local service depots, and sophistication in logistics and handling international customs. A partner that already has established support centers, supply chains, and distribution hubs worldwide is likely a better candidate for an edge platform than one that is new to the game. Further, an NEP or CSP probably doesn’t want to handle RMAs and returns directly. This is where a platform partner with reverse logistics capabilities can add value: handling local sparing, managing local inventory and RMAs, and even performing failure diagnostics. While the CSP or NEP might still have to get involved in determining root cause, a partner who can help with RMAs and confirm whether an issue is a hardware problem can reduce complexity for the NEP.

Comprehensive Platform Certifications

Related to global deployments, a major issue for CSPs and NEPs is certification. A platform partner with in-house capabilities to manage the certification process across major global standards (e.g., CE, FCC) can ensure that the edge platforms will not run into compliance and customs issues when shipped to remote locations. Ideally, the partner can provide a selection of pre-certified platforms that are already configured for edge deployments, with the CSP or NEP simply varying options like storage capacity.

Robust R&D Investment

With the rate of innovation at the edge, the appropriate partner needs to have a robust investment in research and development to keep up with the latest changes and learnings. The edge platform will evolve over the course of the next few years as the use cases we described earlier come to fruition. A partner that is sizable, with the ability to invest in platform innovation, ensures that the CSP and NEP have competitive choices for platforms in the future.

Flexibility in Customization

One of the striking things about edge use cases, as we discussed, is their variety. Depending on the specific use cases that the CSP or NEP wants to support, the platform might need to be modified: more memory and local storage if targeting a CDN, for instance, or the inclusion of GPUs if targeting localized AI or video analytics services. The right partner should have the flexibility to change key components and incorporate PCIe cards with FPGAs, GPUs, or accelerated NICs based on target use cases.

Security Awareness

While the OS and applications in the software stack are owned and managed by the CSP or NEP, there’s a good amount of software in the onboard management system. A robust platform partner can ensure a safe and secure platform by providing a secure boot environment and by keeping UEFI firmware and any baseboard management software patched. Likewise, a partner who understands the importance of TPMs and secure enclaves, and can provide guidance on how best to leverage them, is favored.



Comprehensive Support SLAs and Extended End-of-Life (EOL) Policies

Given the remote nature of many edge platforms, two elements stand out on this topic. First, edge platforms’ replacement cycles will likely run longer than those of servers in more centralized data centers, and the platforms therefore need to be supported for a longer duration. Second, because the cost of failure is much higher at the edge — for example, if the platform is located on an oil rig, the typical “truck roll” turns into a “helicopter sortie” — revision control and verification of components and firmware need to be extremely tight.

As such, ensuring that the platform partner has rigorous processes for managing the lifecycle of components on the bill of materials (BOM) should be part of partner evaluation. Likewise, ensuring that the partner supports extended EOL policies is particularly useful for edge platforms. A strong platform partner has the expertise to provide appropriate guidance on component selection to ensure lifespans consistent with edge deployments. Here, a partner with scale is more likely to have extensive relationships with component suppliers, including CPU manufacturers, resulting in deep insight into their roadmaps and more informed guidance.

Favorable Total Cost of Ownership (TCO) Metrics

At the end of the day, money talks, and a platform partner who fulfills all of the criteria above but cannot deliver the appropriate price/performance for edge platforms will still not qualify. Edge platforms need to be cost-effective, especially given the numbers we anticipate being deployed. However, cost numbers need to be taken in context, and TCO is the important metric: it considers the hardware platform costs as well as any software and integration costs. Platform partners with scale have an advantage here, as they are more likely to have the resources to perform integration testing of relevant operating systems and edge stacks, reducing deployment costs. Likewise, these same players can take advantage of economies of scale and probably have sizable existing platform businesses that have already driven them down the cost curve.
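For illustration only, the sketch below shows the general shape of such a TCO calculation across many sites; every number is a made-up placeholder rather than market or vendor data, and the cost categories are our own simplification.

    # Illustrative-only TCO comparison across many edge sites. All inputs are
    # hypothetical placeholders showing the shape of the calculation.
    def tco(hw_per_site, sw_per_site_yr, integration_one_time,
            support_per_site_yr, field_visits_yr, cost_per_field_visit,
            sites, years):
        capex = sites * hw_per_site + integration_one_time
        opex = years * sites * (sw_per_site_yr + support_per_site_yr)
        field = years * field_visits_yr * cost_per_field_visit
        return capex + opex + field

    total = tco(hw_per_site=4_000, sw_per_site_yr=600, integration_one_time=250_000,
                support_per_site_yr=300, field_visits_yr=120, cost_per_field_visit=1_500,
                sites=1_000, years=5)
    print(f"5-year TCO for 1,000 hypothetical sites: ${total:,.0f}")

Even this toy model shows why pre-validated stacks and remote management matter: integration and field-visit costs scale with the number of sites, not just with hardware prices.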

Conclusion

The edge computing market promises to be a sizable opportunity for all concerned: application developers, platform and infrastructure providers, NEPs, and CSPs. As we roll out edge use cases and dream up unique applications for this new platform, we need to realize that these use cases and the types of deployments have implications for the underlying platform. Whether in software or hardware, the edge brings different needs from those of the data center. Choosing the right platform partner can be a critical element of success at the edge. In the end, as with past technology deployments across enterprise campuses, in cloud data centers, within carrier cores, and now across the edge, the platform does matter.


Rev A

© Copyright 2019 AvidThink, LLC, All Rights Reserved This material may not be copied, reproduced, or modified in whole or in part for any purpose except with express written permission from an authorized representative of AvidThink, LLC. In addition to such written permission to copy, reproduce, or modify this document in whole or part, an acknowledgement of the authors of the document and all applicable portions of the copyright notice must be clearly referenced. All Rights Reserved.

AvidThink, LLC
1900 Camden Ave
San Jose, California 95124 USA
avidthink.com

