Friday, December 1, 2023

"Back to the Future" as Extended Edge Develops

In some ways, extended edge AI, with processing done on devices rather than at remote cloud computing centers, is a form of “back to the future” technology, recalling the days when most data processing happened directly on devices (PCs). 


That suggests a substantial movement back toward a distributed, decentralized processing environment, reversing in large part the centralized model the cloud era brought into being. 


Just as certainly, extended edge will remake the smartphone, smartwatch and PC markets, as those devices are outfitted to handle AI directly on board. 


If one takes the current retail market value of smartphones, PCs and smartwatches, and then assumes adoption rates between 70 percent and 90 percent by 2033, markets supporting extended edge will be quite substantial, implying a doubling to a five-fold increase across software, hardware, chips, platforms, manufacturing and connectivity. 


Market          Market Size in 2023    Estimated Market Size with     AI-Capable Devices
                (USD Billion)          Edge AI in 2033 (USD Billion)  in 2033

Smartphones     430                    1,999                          90%
PCs             250                    600                            80%
Smartwatches    30                     150                            70%
Total           710                    2,749                          80%


Just as PCs put computing power into the hands of individual users, extended edge AI is making AI capabilities accessible to a wider range of devices and users, right on the device itself.


Extended edge AI also will embed AI operations into everyday objects and environments, enabling a range of new operations requiring immediate response (low latency). 


That will require more powerful processors and more storage, which can be a problem for smartphones and wearable devices with limited resources.


Increased power consumption also will be an issue. And AI models will have to be updated. 


Over-the-air updates, federated learning (where devices train a shared model without exchanging raw data), model compression and quantization, model pruning, knowledge distillation and adaptive learning are tools designers can use to ensure that AI models running on extended edge devices can be kept current. 
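To illustrate what quantization means in practice, here is a minimal sketch of symmetric eight-bit quantization (the function names and four-weight example are hypothetical, chosen for illustration): 32-bit floating-point weights are mapped to small integers plus a single scale factor, cutting storage to roughly a quarter at the cost of a little precision.

```python
def quantize(weights):
    """Map float weights to integers in [-127, 127] plus a scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the quantized values."""
    return [v * scale for v in q]

weights = [0.82, -0.41, 0.05, -1.27]
q, scale = quantize(weights)
approx = dequantize(q, scale)
# each recovered weight is within one quantization step of the original
assert all(abs(a - b) <= scale for a, b in zip(weights, approx))
```

Real on-device runtimes use more elaborate schemes (per-channel scales, zero points), but the storage-versus-precision trade is the same.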


Model pruning techniques identify and remove redundant or less important connections within the AI model, reducing its complexity without significantly impacting performance. Knowledge distillation involves transferring the knowledge from a large, complex model to a smaller, more efficient model, preserving the original model's capabilities.
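Magnitude-based pruning, the simplest variant of the technique described above, can be sketched in a few lines (the `prune` function and sparsity target here are illustrative assumptions, not a production method): the smallest-magnitude fraction of weights is treated as redundant and zeroed out.

```python
def prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights."""
    ranked = sorted(abs(w) for w in weights)
    cutoff = ranked[int(len(ranked) * sparsity)]  # threshold magnitude
    return [w if abs(w) >= cutoff else 0.0 for w in weights]

weights = [0.9, -0.02, 0.4, 0.01, -0.7, 0.03]
pruned = prune(weights, sparsity=0.5)
# the three smallest-magnitude connections are removed
assert pruned == [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```

The zeroed connections can then be stored sparsely or skipped at inference time, which is where the on-device savings come from.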


Adaptive learning algorithms enable AI models to continuously learn and adapt to changing environments and user behavior.
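In its simplest online form, adaptive learning means nudging model parameters after every new observation rather than retraining from scratch. A minimal sketch, using an assumed one-weight linear predictor purely for illustration:

```python
def update(weight, x, y, lr=0.1):
    """One online gradient step for the predictor y_hat = weight * x."""
    error = weight * x - y
    return weight - lr * error * x

w = 0.0
# the environment behaves as y = 2x; the weight drifts toward 2
# as observations arrive, without any offline retraining
for x, y in [(1, 2), (2, 4), (1, 2), (2, 4)]:
    w = update(w, x, y)
```

If user behavior shifts (say, to y = 3x), the same update rule tracks the change, which is the property that matters on an edge device.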


