How Edge Computing is Redefining Cloud Technologies

Christopher Bergey, Senior VP of Devices Products at Western Digital, spoke at Compuforum 2018 on “Edge Computing – The Future of Memory and Cloud” on June 6th in Taipei. This is the fifth year Western Digital has been invited by TrendForce to share its thoughts on market trends and outlooks with nearly 300 attendees. This year’s focus was on IoT. Read on to learn why edge computing is redefining cloud technologies, and why advances such as AI and 5G, along with the growing complexity of data, are driving the need for more storage at the edge.

Christopher Bergey spoke at Compuforum 2018 on “Edge Computing – The Future of Memory and Cloud”

While cloud computing has been touted as the perfect solution for big data, recent developments in artificial intelligence (AI), machine learning and the promise of future 5G connectivity are dramatically changing how data is created, processed, stored and utilized. Beyond the sheer volume of data being created at an astonishing rate, the complexity of that data, in terms of content and context, is pushing the limits of today’s cloud framework. Edge computing, including innovation in sensors, clients, gateways, and the interoperation among them, is changing the game, reshaping the future of cloud and memory technologies and of the valuable data they hold.

Data is everywhere. It’s multiplying as we speak. Much of today’s economy is focused on unleashing the power of this data. In Accenture and Western Digital’s “Dawn of the Data Marketplace” report, hot off the presses, it is estimated that by 2030 there will be 100 billion IoT devices in operation. These IoT devices live at the edge and collect huge volumes of data. I recently spoke at IoT World about how much of this data is untapped and how to get more value from IoT data collected at both the edge and the cloud.

Why the Rise of Edge Computing?

Industry analyst Bob O’Donnell blogged about how edge computing is reshaping the cloud and how we compute. In a nutshell, he explained that the capabilities cloud computing brought to the table have become so powerful and important that there is now a need to spread that intelligence closer to the devices that sit at the edge.

The cloud has inherent advantages in enabling big data analysis and data warehousing. Deep learning is made possible by cloud technologies. All of these important, impactful tasks require huge amounts of data and time to generate meaningful outcomes for future machine learning and decision-making recommendations. The challenge with the cloud is the inherent latency of data transmission and communication, a result of constrained network bandwidth and high cost. Edge computing was born out of the need to reduce that latency for faster decision making. With the technology advancements in sensors, cameras and storage in recent years, the entire ecosystem, from System-on-Chip (SoC) vendors and platforms to OEMs and app developers, is now in place and committed to supporting smarter devices and more compute at the edge.
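To make that latency trade-off concrete, here is a minimal Python sketch of the pattern described above: a small model at the edge makes real-time decisions locally, and only ambiguous cases are sent to the cloud. The latency figures, models and threshold are illustrative assumptions, not a specific product or API.

import random
import time

EDGE_LATENCY_S = 0.02    # assumed local inference time on a device or gateway
CLOUD_LATENCY_S = 0.15   # assumed network round trip to a distant data center

def edge_infer(sample):
    # Small on-device model: fast, but less confident on hard cases.
    time.sleep(EDGE_LATENCY_S)
    label = "anomaly" if sample > 0.8 else "normal"
    return label, random.uniform(0.5, 1.0)

def cloud_infer(sample):
    # Larger cloud model: slower round trip, used only when needed.
    time.sleep(CLOUD_LATENCY_S)
    return "anomaly" if sample > 0.8 else "normal"

def handle(sample, threshold=0.9):
    label, confidence = edge_infer(sample)
    if confidence >= threshold:
        return label               # real-time decision made at the edge
    return cloud_infer(sample)     # ambiguous cases defer to the cloud

if __name__ == "__main__":
    readings = [random.random() for _ in range(5)]
    print([handle(r) for r in readings])

Most readings never leave the device; only the uncertain ones pay the network round trip, which is the essence of pushing compute to the edge.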

Fast data generated at the edge enables real-time decision making with context right at the device, which is what makes these devices more intelligent. Models trained on data cached at the edge can infer and predict behaviors and future events. A quick look at the evolution of smartphones over the last 10 years gives a very clear picture of how quickly they went from being a passive device for calls, messaging, photos and short videos to a more interactive device with capabilities such as video chat, mobile payment, advanced photography (including multi-shot, panoramic and 360-degree pictures and 4K video), VR, AR and much more. We’ve already seen the trend of the smartphone becoming the hub of a smart home where everything is connected.

Video data is a great way to illustrate what’s to come because video traffic is what’s most straining today’s networks. Whether it’s the bandwidth required, the sampling rates, or the sheer number of sensors, video is driving the direction of future architectures.

To see how these demanding and emerging applications impact storage, we should also pay attention to how video data is captured, stored, analyzed and shared, because it demonstrates how storage will evolve. Today’s content delivery networks (CDNs) cache data throughout the network, closer to users, so there is less latency and congestion. At the aggregation network, new product categories are evolving, built around 5G. Access networks, with multiple endpoints, gateways and aggregation points, are reaching much deeper into the network.
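As a rough illustration of the caching idea behind CDNs, here is a minimal sketch of an edge cache that serves repeat requests locally and only goes back to the origin on a miss. The capacity, keys and origin fetch are assumptions for illustration only, not how any particular CDN is implemented.

from collections import OrderedDict

class EdgeCache:
    def __init__(self, capacity=3):
        self.capacity = capacity
        self.store = OrderedDict()            # insertion order doubles as recency order

    def get(self, key, fetch_from_origin):
        if key in self.store:
            self.store.move_to_end(key)       # cache hit: served locally, low latency
            return self.store[key]
        value = fetch_from_origin(key)        # cache miss: go back to the origin
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)    # evict the least recently used item
        return value

def origin_fetch(key):
    return f"video-segment:{key}"             # stand-in for a slow request to the origin

cache = EdgeCache()
for segment in ["a", "b", "a", "c", "d", "a"]:
    print(segment, "->", cache.get(segment, origin_fetch))

The popular segments stay near the user, so repeated requests avoid the long haul back to the data center, which is exactly the latency and congestion relief the CDN layer provides.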

New Workloads, New Opportunities

As augmented reality and virtual reality applications evolve, we ask questions like: when is the pixel density high enough? What sample rates are required? In automotive, video sensors are driving autonomous cars. While humans may not need as many updates, when AI or machine learning is invoked, more frequent samples are required.

In surveillance, applications such as human classification turn to AI to classify all the video data that’s collected. Traditionally, very little of that footage was actually used; video was only reviewed if there was an incident. Today, however, by using AI to store and classify that data, retail and other establishments can get much more value out of the data if it’s mined.
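To illustrate that pattern, here is a minimal sketch of edge-side classification in which only frames flagged by an on-device detector are forwarded upstream, while the rest remain in local storage. The Frame structure and the detector are hypothetical stand-ins, not a real camera or analytics API.

from dataclasses import dataclass

@dataclass
class Frame:
    camera_id: str
    timestamp: int
    contains_person: bool   # in practice this flag would come from an on-device model

def detect_person(frame):
    return frame.contains_person          # placeholder for real edge inference

def process_stream(frames):
    forwarded = []
    for frame in frames:
        if detect_person(frame):          # forward only frames with a classified event
            forwarded.append(frame)
        # all other frames stay in local storage until they age out
    return forwarded

frames = [Frame("cam-1", t, contains_person=(t % 3 == 0)) for t in range(9)]
print(len(process_stream(frames)), "of", len(frames), "frames forwarded")

Classifying at the edge means the bulk of raw footage never crosses the network, yet the classified events remain available for mining later.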

In automotive, today’s digital displays are evolving into full AR predictive hubs. With 5G there will be strides in connectivity for over-the-air (OTA) updates, weather, traffic and entertainment. Assisted driving will evolve into autonomous driving, and vehicle-to-vehicle (V2V) communication will become a reality.

In all of these cases, there is a need for more storage and compute at the edge.

New Requirements for Data at the Edge

As you build out the network, challenges arise in managing all this data, as well as the instrumentation required for IoT and edge computing. This sets the stage for a new wave of innovation we’re just starting to see in areas like machine learning, which poses opportunities as well as challenges.

Requirements stemming from diverse environments (temperatures, vibrations, etc.), workloads, remote maintenance, accessibility and longevity mean there is no one-size-fits-all solution. Our customers need a data strategy. Along with the opportunities made possible by the power of data come implied responsibilities, which lead to various regulations and levels of governance.

Data is everywhere. AI is changing how we think about data and its requirements, and new storage and processing architectures are changing those requirements in turn. Now more than ever, we are at a turning point where we bear a great responsibility for creating, leading and enabling the environments in which data can thrive.

Forward-Looking Statements

Certain blog and other posts on this website may contain forward-looking statements, including statements relating to expectations for our product portfolio, the market for our products, product development efforts, and the capacities, capabilities and applications of our products. These forward-looking statements are subject to risks and uncertainties that could cause actual results to differ materially from those expressed in the forward-looking statements, including development challenges or delays, supply chain and logistics issues, changes in markets, demand, global economic conditions and other risks and uncertainties listed in Western Digital Corporation’s most recent quarterly and annual reports filed with the Securities and Exchange Commission, to which your attention is directed. Readers are cautioned not to place undue reliance on these forward-looking statements and we undertake no obligation to update these forward-looking statements to reflect subsequent events or circumstances.
