Friday, July 22, 2016

Telecom Agenda Being Set by "Newcomers," to a Large Extent

Facebook’s efforts to help extend Internet access “to everyone”  now are taking several different and complementary paths. Internet.org is working on app packaging. Its regulatory teams are working to support release of more unlicensed and shared spectrum.

Its Aquila unmanned aerial vehicle program is working on backhaul. Its Terragraph 60-GHz wireless mesh network is designed to enable lower-cost Internet access in dense, urban areas. That has led to Facebook’s millimeter wave mesh network concept.

Project Aries is working on more spectrally efficient radio access capabilities, so any radio network can deliver more bits in any given amount of bandwidth.

OpenCellular expects to create open source cellular network technology that can be used by any entity wanting to build a mobile or wireless access network.

The Telecom Infra Project is an effort to create lower-cost, more-efficient telecom access networks, and is modeled on what Facebook did with its open data center efforts.

All of that effort, as well as Google’s efforts, suggests a coming new world where access platforms and networks are created, and perhaps operated, by any number of new providers, not traditional telecom access providers (cable and telco).

Google acts as an Internet service provider through Google Fiber; acts as a mobile operator through Google Fi; supplies the world-leading Android mobile operating system; has created its reference platform Nexus line of devices; is working on unmanned aerial vehicles and balloons for backhaul and access; has deployed Wi-Fi hotspot networks; and has, many say, set the Internet agenda in the United States and Europe.


What if Social Media Were Your Neighbors, What if Operating Systems were Airlines?

After two decades, I still find “if operating systems were airlines” hilarious. More operating systems are compared in this version. Here’s a slightly updated version.

Nobody has tried to do something similar for cloud-era processes, though “what if social media were your neighbors” has some of that flavor.


Google Applies Artificial Intelligence to Cut Data Center Power for Cooling Up to 40%

It is not yet clear how, when and how much machine learning and other forms of artificial intelligence will start to reshape the way customers buy and use communication services. For the moment, AI likely will make its mark on non-customer-facing processes.

Google’s data centers, for example, should soon be able to reduce the energy consumed for cooling by up to 40 percent by applying machine learning.

Servers generate lots of heat, so cooling drives power consumption requirements. But peak heat dissipation requirements are highly dynamic, complex and non-linear, Google DeepMind says.

The machine learning system was able to consistently achieve a 40 percent reduction in the amount of energy used for cooling, which equates to a 15 percent reduction in overall power usage effectiveness (PUE) overhead.
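Power usage effectiveness is total facility energy divided by IT equipment energy, so a large cut in cooling energy shows up as a smaller change in the overall ratio. A back-of-the-envelope illustration, using assumed load shares (not Google’s actual figures), shows how a 40 percent cooling reduction can equate to roughly a 15 percent reduction in PUE overhead:

```python
# Illustrative only: the load shares below are assumptions, not Google's data.
it_load = 1.00   # IT equipment energy (normalized)
cooling = 0.15   # assumed cooling energy before optimization
other = 0.25     # assumed other overhead (power distribution, lighting, etc.)

pue_before = (it_load + cooling + other) / it_load       # 1.40
cooling_after = cooling * (1 - 0.40)                     # 40% cooling reduction
pue_after = (it_load + cooling_after + other) / it_load  # 1.34

# PUE overhead is the amount by which PUE exceeds 1.0.
overhead_reduction = ((pue_before - 1) - (pue_after - 1)) / (pue_before - 1)
print(f"PUE: {pue_before:.2f} -> {pue_after:.2f}")
print(f"Reduction in PUE overhead: {overhead_reduction:.0%}")
```

With these assumed shares, PUE falls from 1.40 to 1.34, a 15 percent cut in the overhead portion.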

“Because the algorithm is a general-purpose framework to understand complex dynamics, we plan to apply this to other challenges in the data centre environment and beyond in the coming months,” said Rich Evans, DeepMind research engineer, and Jim Gao, Google data center engineer.

Probably few would debate the potential use of machine learning and artificial intelligence to improve industrial processes.

Sales processes, though, likely are not an area where most would expect big changes. Products sold to business customers and larger organizations generally are considered complex matters, requiring customization.

Enterprise communications requirements are more complicated than data center power consumption processes, one could argue. But are they? Google and DeepMind applied AI to historical operating data to develop new rules for managing a complex system.

In essence, do sales and engineering personnel not have, to a relatively high degree, accumulated wisdom about the general problems, existing processes and typical software and hardware used by enterprise customers?

And where the typical solution involves recommendations for removing, adding or altering services and features to solve enterprise communication problems, are there not patterns designers and sales personnel can rely upon?

If so, might it not be possible to radically simplify the process of understanding and then “quoting” a solution? And if this cannot be done on a fully automated basis, might it still be done on a wide enough scale to deliver business value for a communications supplier?

In other words, could AI simplify substantial parts of the enterprise solutions business? Most who do such things for a living might argue the answer is “no.” But are enterprise solutions completely unique? Are there not functional algorithms engineers and network architects work with that are, in fact, bounded in terms of potential solutions?

And, if so, could not large amounts of the analysis, design and reconfiguration be done using AI? Airline reservation systems were, and are, quite complex. And yet consumers now use tools built on those systems to buy their own tickets.

Business communication solutions are complex. But they are not unmanageably complex. People can, and do, create solutions based on the use of what effectively are algorithms. We might call it experience or skill. That it is. But it is based on rules, formal or informal.

Rules-based systems can be modeled. And that could have huge implications for how business communications solutions are designed, provisioned and sold.
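The argument can be made concrete with a toy sketch. The rules, product names and thresholds below are invented for illustration, not any supplier’s actual catalog; the point is only that requirements-to-quote logic, once written as rules, becomes something software (and eventually a learned model) can execute:

```python
# Hypothetical rules-based quoting sketch; products, rules and thresholds
# are invented for illustration.
def quote(sites: int, users_per_site: int, needs_failover: bool) -> dict:
    """Map enterprise requirements to a recommended bundle using simple rules."""
    items = {}
    # Rule: one access circuit per site, sized by headcount.
    circuit = "1 Gbps fiber" if users_per_site > 100 else "100 Mbps Ethernet"
    items[circuit] = sites
    # Rule: multi-site networks get SD-WAN overlay management.
    if sites > 1:
        items["SD-WAN management"] = sites
    # Rule: a failover requirement adds wireless backup at each site.
    if needs_failover:
        items["LTE backup"] = sites
    return items

print(quote(sites=3, users_per_site=150, needs_failover=True))
```

Real enterprise designs involve far more variables, but if the bulk of them reduce to bounded rules like these, the quoting step is a candidate for automation.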

Thursday, July 21, 2016

Google Fiber Hits Pause in Portland Market

Google Fiber has put its plans to build a fiber access network in Portland in the fall of 2016 on at least temporary hold. “Why” is the question everyone should be asking.

In principle, Google Fiber could be wrestling with the business model, as competitors Comcast and CenturyLink have been upgrading their own networks in the Portland area. That might make for a more-difficult payback model.

CenturyLink, for example, already is selling gigabit services in Portland, as is Comcast. That arguably makes the business model harder for Google Fiber or any other ISP that might be contemplating entering the Portland market.

It will not be so easy to attack the market as the only provider of 1 Gbps service if the other two leading ISPs already offer it.

"We're continuing to explore the possibility of bringing Google Fiber to Portland and other potential cities," Google says. "This means deploying the latest technologies in alignment with our product roadmap, while understanding local requirements and challenges, which takes time."

Also, it might be reasonable to assume that Google Fiber is about to try to become “Google Internet,” using fixed wireless as the access platform, not optical fiber.

That would be a major development. Facebook, AT&T and Verizon are other entities expected to promote or use fixed wireless as a major access platform for Internet access, and eventually gigabit access services.

Personally, I think Google Fiber has looked at the numbers and concluded a gigabit network that might cost $300 million is simply too big a risk in the existing Portland market, compared to its potential prospects several years ago: before CenturyLink and Comcast moved to upgrade.

This could be the leading edge of a very big change in access strategy and business models.

No Demand for Fractional T-1?

AT&T has asked the Federal Communications Commission for permission to stop selling fractional T-1 services, which have very little demand, in Arkansas, California, Illinois, Indiana, Kansas, Michigan, Missouri, Nevada, Ohio, Oklahoma, Texas and Wisconsin.

In fact, says AT&T, the company “has no customers subscribing to this service in Arkansas, California, Kansas, Missouri, Nevada, Oklahoma, and Texas.”

Once upon a time, a fractional T-1 service (128 kbps, 256 kbps, 384 kbps, 512 kbps or 768 kbps) was an affordable alternative to purchasing a full T-1. In the 1990s, some of you might even have purchased a fractional T-1 service (consumer or business).
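Those speeds are not arbitrary: a full T-1 carries 24 DS0 channels of 64 kbps each (1.536 Mbps of payload on a 1.544 Mbps line), and a fractional T-1 simply sold a subset of those channels:

```python
DS0_KBPS = 64     # one DS0 channel, the classic digital voice circuit
T1_CHANNELS = 24  # channels in a full T-1 (1.536 Mbps payload)

# The standard fractional tiers are simple multiples of the DS0 rate.
for channels in (2, 4, 6, 8, 12):
    print(f"{channels:2d} DS0s = {channels * DS0_KBPS} kbps")
```

Two channels yield 128 kbps, four yield 256 kbps, and so on up to twelve channels at 768 kbps, exactly half a T-1.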

These days, even if some legacy applications remain, you would be hard pressed to point to any widely-used or mission-critical service that depends on fractional T-1, and fewer and fewer applications for full T-1 services as well.

As AT&T points out, people and businesses simply do not buy fractional T-1 anymore.

Dish Network 2Q: Revenue and Subscriber Losses

Dish Network reported second-quarter 2016 earnings that topped expectations, but it also recorded a net loss of 281,000 pay-TV subscribers, including both satellite and the Sling web TV service, said to be the biggest quarterly subscriber loss ever for Dish.

DirecTV, owned by AT&T, seems to have added customer accounts in the second quarter.


Here’s the importance: every legacy service provider is in a race to create new revenue streams at least as fast as each service provider loses legacy accounts. Pressure on top-line revenue and customer account attrition might mean Dish Network is losing that battle, despite the launch of Sling TV streaming services.

That leaves speculation about Dish Network entering the mobile business.

Opinions about what Dish Network might be able to do with its amassed mobile spectrum have varied. Some seem never to have believed Dish Network really would become a mobile service provider, and eventually would simply sell its spectrum.

Others believed Dish Network might well try and enter the mobile business.

The “problem” for observers is that much hinges on whether Dish Network concludes it is time to sell, time to build to create value before selling, or time to transition to a new business model and grow over the long term.

It is not clear that anybody outside Dish Network, aside from Charlie Ergen, Dish CEO, has any idea what the company will do.

Verizon Enterprise Solutions Launches "Virtual Network" for Enterprises

Verizon Enterprise Solutions is launching Virtual Network Services, giving enterprises access to what we have long called “bandwidth on demand” features. Verizon Enterprise Solutions calls it a “virtual infrastructure model.”

Verizon says it will offer three models for deploying virtualized services including: premises-based universal customer premises equipment (CPE), cloud-based virtual CPE services (available fall 2016) and hybrid services where clients can mix premises-based and cloud-based deployment.

Service providers have wanted such capabilities since the 1980s, generally referring to the concept as bandwidth on demand, and generally believing it would happen first for business customers, especially enterprises preprovisioned with optical access.

These days those concepts are more likely to be known as “virtual infrastructure” or “virtual networks.”

Verizon’s initial Virtual Network Service packages are Security, WAN Optimization and SD-WAN services.
Verizon’s new services can be delivered across public, private and wireless networks from Verizon or other service providers, or a combination of multiple providers across multiple networks.

Directv-Dish Merger Fails

Directv’s termination of its deal to merge with EchoStar, apparently because EchoStar bondholders did not approve, means EchoStar continue...