Cloud computing requires data centers. Data centers require communications. Communications requires computing and data centers. That means each industry potentially can assume additional roles in the other parts of the ecosystem. That, in turn, perennially spurs hopes that connectivity providers can operate in the cloud computing or data center roles.
So Orange now talks about a strategy that builds on connectivity but adds other digital roles. Other "telcos" have similar hopes of shifting from being "telcos" to becoming "techcos." It never is completely clear what that means, but the implication always is that new lines of business and revenue are the result.
Hopes aside, connectivity services now have become an essential foundation for cloud computing and data center value.
Some adaptations of data center architecture in the cloud computing era have more to do with connectivity architecture than with the way servers are cooled, the types of processors and specialized chips they use, or the way servers are clustered.
A cluster is a large unit of deployment, involving hundreds of server cabinets with top-of-rack (TOR) switches aggregated by a set of cluster switches. Meta has rearchitected its switch and server network into "pods," in an effort to recast the entire data center as one fabric.
The point is to create a modularized compute fabric able to function with lower-cost switches. That, in turn, means the fabric is not limited by the port density of the switches. It also enables lower-cost deployment as efficient standard units are the building blocks.
That also leads to simpler management tasks and operational supervision as well.
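As a rough illustration of that port-density point, here is a minimal sketch. The switch radix values and the even split of each leaf switch's ports between servers and uplinks are assumptions for the example, not details of Meta's actual design.

```python
# Rough capacity math for a modular leaf-spine "pod" fabric.
# Assumption (not from the article): every switch is the same commodity
# box with `radix` ports, and leaf ports are split evenly between
# server-facing ports (down) and spine uplinks (up).

def pod_capacity(radix: int) -> dict:
    """Servers supported by one pod built entirely from identical switches."""
    leaf_down = radix // 2          # leaf ports facing servers
    leaf_up = radix - leaf_down     # leaf ports facing the spine
    leaves = radix                  # each spine port can host one leaf
    spines = leaf_up                # each leaf uplink needs its own spine
    return {
        "leaf_switches": leaves,
        "spine_switches": spines,
        "servers": leaves * leaf_down,
    }

if __name__ == "__main__":
    # A modest 48-port commodity switch already yields more than a
    # thousand server ports per pod.
    for radix in (32, 48, 64):
        print(radix, pod_capacity(radix))
```

The takeaway is that capacity grows with the square of a commodity switch's port count, and further growth comes from adding standard pods rather than buying denser, costlier switches.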
And though much of that architecture addresses the east-west traffic flowing among servers within any single data center, the value of that fabric also hinges on the robustness of the wide area connections between data centers and the rest of the internet ecosystem.
In fact, it is virtually impossible to describe a data center architecture without reference to wide area and local area connectivity, whether the internal network uses the older three-tier or the newer spine-leaf model. The point is that data centers require connectivity as a fundamental part of the computing architecture.
That will likely be amplified as data centers move to support machine learning operations as well as higher-order artificial intelligence operations.
Still, in the cloud computing era, no data center has much value unless it is connected by high-bandwidth optical fiber links to other data centers and to internet access and transport networks.
“From a connectivity perspective, these networks are heavily meshed fiber infrastructures to ensure that no one server is more than two network hops from each other,” says Corning.
The other change is a shift in the relative importance of east-west (server-to-server, inside the data center) and north-south (into and out of the data center) traffic and data movement. As important as intra-data-center traffic remains, communications across wide area networks assume new importance in the AI era.
The traditional three-tier architecture of core, aggregation and access switches increasingly looks to be replaced by a two-layer spine-leaf design that reduces latency. In other words, in addition to what happens inside the data center, there will be changes outside the data center, in the connectivity network design.
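To put the latency argument in concrete terms, here is a minimal sketch comparing worst-case switch-to-switch hop counts in a textbook three-tier design and a textbook spine-leaf fabric; the numbers are illustrative, not figures from the article.

```python
# Illustrative (not from the article) worst-case hop counts for
# server-to-server traffic, counting only switch-to-switch hops.

THREE_TIER_HOPS = {
    "same access switch": 0,       # traffic stays on one switch
    "same aggregation block": 2,   # access -> aggregation -> access
    "across the core": 4,          # access -> agg -> core -> agg -> access
}

SPINE_LEAF_HOPS = {
    "same leaf switch": 0,
    "any other leaf switch": 2,    # leaf -> spine -> leaf
}

def worst_case(hops: dict) -> int:
    """Worst-case inter-switch hop count for a given design."""
    return max(hops.values())

if __name__ == "__main__":
    print("three-tier worst case:", worst_case(THREE_TIER_HOPS), "hops")
    print("spine-leaf worst case:", worst_case(SPINE_LEAF_HOPS), "hops")
    # The flatter fabric caps any server-to-server path at two hops,
    # which is the latency argument for the two-layer design and matches
    # the "no more than two network hops" point in the Corning quote.
```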
In a typical data center layout, the Entrance Room (ER) is the entrance facility where outside network connections come into the data center.
The Main Distribution Area (MDA) holds equipment such as routers, LAN and SAN switches. The Zone Distribution Area (ZDA) is a consolidation point for all the data center network cabling and switches.
The Equipment Distribution Area (EDA) is the main server area where the racks and cabinets are located.
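For readers who think in code, the areas described above can be sketched as the ordered path a structured cabling run takes from the carrier hand-off to a server cabinet. The ordering shown is the typical one and the role strings simply paraphrase the descriptions above, so treat this as an illustration rather than a specification.

```python
# A minimal sketch of the distribution areas described above, modeled
# as the ordered path a cable run takes from the carrier hand-off to a
# server cabinet. A given facility may collapse or repeat areas.

from dataclasses import dataclass

@dataclass
class Area:
    code: str
    name: str
    role: str

CABLE_PATH = [
    Area("ER",  "Entrance Room",             "carrier circuits enter the building"),
    Area("MDA", "Main Distribution Area",    "routers, LAN and SAN switches"),
    Area("ZDA", "Zone Distribution Area",    "consolidation point for cabling and switches"),
    Area("EDA", "Equipment Distribution Area", "server racks and cabinets"),
]

if __name__ == "__main__":
    for step, area in enumerate(CABLE_PATH, start=1):
        print(f"{step}. {area.code}: {area.name} -- {area.role}")
```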
So connectivity may remain a separate business from cloud computing and the data center business, but all are part of a single ecosystem these days. Cloud computing requires data centers, and data centers require good connectivity.