These days, all networks are becoming computing networks. Computing and communications, moreover, historically have been partial substitutes for each other: architects can substitute local computing for remote computing, and vice versa. Mainframe, onboard, client-server, cloud and edge computing each use a different mix of communications and computation resources.
Edge computing, most would agree, is among the hottest computing ideas at the moment, and it reduces capital spent on communications by putting computing resources closer to edge devices.
But technologists at Samsung believe further distribution of computing chores is possible. They use the new term “split computing” to describe a future state in which computing chores are handled partly on a device and partly at some off-device site.
In some cases a sensor might offload part of its computation to a phone. In other cases a device might augment its own internal computing with a cloud resource. And in still other cases a device or sensor might invoke an edge computing resource. A rough sketch of the pattern follows.
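Here is a minimal Python sketch of that idea, under the assumption that a computing chore can be expressed as a pipeline of stages cut at some point; the stage functions and the send_to_remote transport are hypothetical placeholders, not anything Samsung has published.

def run_split(stages, cut_point, data, send_to_remote):
    # Run the early stages locally, on the device itself.
    for stage in stages[:cut_point]:
        data = stage(data)
    # Ship the intermediate result to the off-device resource (a phone,
    # an edge node or the cloud), which runs the remaining stages and
    # returns the final output.
    return send_to_remote(stages[cut_point:], data)

Moving the cut point toward the device trades communications load for local computation; moving it away does the reverse.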
Conventional distributed computing is based on a client-server model, in which the implementation of each client and server is specific to a given developer, Samsung notes.
To support devices and apps using split computing, an open source split computing platform or standard would be helpful, Samsung says.
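To make that concrete, here is a hypothetical sketch of the kind of common interface such a standard might define, so any device could hand work to any conforming resource. The class and method names are illustrative assumptions, not Samsung's proposal or any published specification.

from abc import ABC, abstractmethod

class SplitComputeResource(ABC):
    # Any resource willing to accept offloaded work (a phone, an edge
    # node, a cloud service) would implement this one interface, rather
    # than a developer-specific client-server protocol.

    @abstractmethod
    def capabilities(self) -> dict:
        """Advertise compute capacity, supported task types and cost."""

    @abstractmethod
    def execute(self, task: bytes) -> bytes:
        """Run a standardized task description and return its result."""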
With split computing, mobile devices can effectively achieve higher performance while extending their battery life, because heavy computation tasks are offloaded to computation resources available in the network.
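That claim implies a decision rule: offload a task only when shipping its input over the network and computing it remotely is expected to finish sooner than computing it on the device. A hedged sketch, with all figures (cycle counts, clock rates, link speed) as illustrative assumptions:

def should_offload(task_cycles, input_bytes,
                   device_hz, remote_hz, link_bytes_per_sec):
    # Estimated time to run the task entirely on the device.
    local_time = task_cycles / device_hz
    # Estimated time to upload the input, then run remotely.
    remote_time = input_bytes / link_bytes_per_sec + task_cycles / remote_hz
    return remote_time < local_time

# Example: a 2-billion-cycle task with a 1 MB input, a 2 GHz device CPU,
# a 20 GHz-equivalent edge server and a 10 MB/s uplink.
print(should_offload(2e9, 1e6, 2e9, 20e9, 10e6))  # True: offloading wins

A real system would also weigh energy per transmitted byte and per local cycle, but the structure of the trade-off is the same.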
You might agree that the split computing concept is in line with emerging computing and communications fabrics that increasingly operate by using any available resource. Up to this point, that has been seen most vividly in the way devices and apps use Wi-Fi.
In the future we may see more instances of devices using any authorized and available frequency, network, tower or computing resource.