Industry analysts, data center operators, and enterprise customers continue to evaluate the costs of adding edge computing capabilities to their existing data center architectures, resulting in arguably slower market growth than some had predicted.
Now there is the additional consideration of how support for artificial intelligence operations “as a service” could affect edge computing. It might be fair to say that, at the moment, AI requirements are not boosting prospects for edge computing all that much.
In fact, one might make an argument for a barbell-type growth pattern, with AI increasing demand for remote processing on one hand and on-device processing on the other. Remote processing might still be the best option for heavy processing and storage requirements, while many latency-dependent operations will increasingly be handled right on the device.
The arguments for edge computing “as a service” have always focused on use cases requiring very low latency, with additional value from reduced demand on network capacity and possibly some privacy and security advantages. For enterprise computing in general, processing power has been a concern when evaluating private and cloud computing alternatives.
And cost has become a growing issue as “cloud computing as a service” has grown in volume and scale.
But on-device and on-premises private processing alternatives are always a possibility, and remote processing is always an alternative for non-real-time use cases.
So the new requirements for AI “as a service” still hinge on latency and processing power.
Some might characterize the present state of thinking around edge computing as more focused on cost-benefit and payback than on hype, with less of the “either/or” rhetoric, espoused by some early on, that cast edge computing as a “replacement” for remote cloud computing.
Forecasts of market growth have varied significantly, whether the measure is service revenue, capital investment, endpoints supported, or processing capacity.
There's a strong possibility that AI processing will follow a barbell pattern, with the majority of processing happening on devices or at remote data centers, leaving edge processing with a smaller role. Here's a breakdown of the reasoning:
On-device processing will be preferred or necessary for real-time use cases, including facial recognition, augmented reality, image processing, and speech interfaces. Privacy issues also are addressed when processing happens on the device rather than at a remote data center.
Remote computing will still make sense when processing power and storage are key requirements but real-time response is not.
Also, centralized management and control will have advantages for processing-intensive or data-intensive operations, or in instances where collaboration with many other entities is required.
So one might postulate that edge computing will have value in between the on-device and remote cloud processing ends of the barbell.
Edge devices (sensors and network gateways, for example) often have limited processing power and battery life, which restricts their ability to handle complex AI tasks, yet they might also have latency requirements.
Also, in some instances, connectivity impediments or cost might make local processing attractive.
So AI processing might follow a barbell pattern: on-device processing prioritized for applications requiring low latency or strong user privacy, and remote data center processing essential for training complex AI models, managing large datasets, and facilitating collaboration.
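To make that barbell logic concrete, here is a minimal, purely illustrative sketch of how a placement decision might weigh latency, compute demand, and privacy. Everything in it, the Workload fields, the thresholds, and the place_workload function, is a hypothetical assumption made for illustration, not any vendor’s actual scheduler; real deployments would tune such rules against measured device and network capabilities.

```python
from dataclasses import dataclass

# Illustrative placement tiers from the barbell discussion above.
ON_DEVICE, EDGE, REMOTE_CLOUD = "on-device", "edge", "remote cloud"

@dataclass
class Workload:
    latency_budget_ms: float          # how quickly a response is needed
    compute_gflops: float             # rough processing demand per request
    privacy_sensitive: bool           # should raw data stay local?
    needs_big_data_or_training: bool  # large datasets / model training

# Hypothetical capacity thresholds; real values depend on the device,
# the edge node, and measured network round-trip times.
DEVICE_MAX_GFLOPS = 10.0        # small NPU / mobile SoC class budget
EDGE_LATENCY_CEILING_MS = 50.0  # round trip a metro edge node can meet

def place_workload(w: Workload) -> str:
    """Pick a processing tier following the barbell reasoning in the text."""
    # Training and large-dataset jobs pull toward the remote end.
    if w.needs_big_data_or_training:
        return REMOTE_CLOUD
    # Real-time or privacy-sensitive work pulls toward the device,
    # if the device can actually handle the compute.
    if ((w.privacy_sensitive or w.latency_budget_ms < EDGE_LATENCY_CEILING_MS)
            and w.compute_gflops <= DEVICE_MAX_GFLOPS):
        return ON_DEVICE
    # Edge wins only in the middle: too heavy for the device,
    # too latency-sensitive for a distant data center.
    if w.latency_budget_ms < EDGE_LATENCY_CEILING_MS:
        return EDGE
    return REMOTE_CLOUD

# Examples mirroring the use cases above:
print(place_workload(Workload(20, 2, True, False)))     # on-device (speech UI)
print(place_workload(Workload(500, 900, False, True)))  # remote cloud (training)
print(place_workload(Workload(30, 200, False, False)))  # edge (heavy + low latency)
```

The shape of the logic is the point: edge is selected only after both ends of the barbell have been ruled out, which mirrors the “in between” value argument.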
Edge computing has to provide value someplace in between, and that is part of the business value issue. As is often the case, products have clear value at the high end and low end of any market. In the middle is where balances have to be struck.