Tuesday, January 28, 2020

Are Wi-Fi Routers Dangerous to Your Health?

For as long as I can remember, there have been periodic and generally low-level concerns about non-ionizing radiation--the type of energy radio signals represent. By non-ionizing, we mean that the signals are not capable of dislodging electrons from atoms or molecules, as x-rays or gamma rays can. Ionizing radiation, in high doses, is carcinogenic, though useful in low doses.

Non-ionizing radiation can cause tissue heating, as you can experience with food in a microwave oven. The health concerns about non-ionizing radiation come from potential long-term exposure. As with any form of natural radiation (sunlight, for example), the key is exposure level.

The key thing about non-ionizing radiation is that it is found, in real-world communications cases, at very low power levels. Also, signal power falls off rapidly with distance: in free space it follows the inverse-square law, which is why engineers describe the loss on a logarithmic (decibel) scale.

Consider how a Wi-Fi router’s power levels drop with distance. Power levels drop by more than half in the first two meters. Once people are about four meters from the router, signal levels have fallen from milliwatts to microwatts, three orders of magnitude (1,000 times).

Some people are concerned about power emitted from mobile cell towers. Keep in mind that signals from mobile radios on cell towers decay with distance just as Wi-Fi signals do. Some liken the transmit power of a mobile radio on a tower to that of a light bulb.

Radio signals weaken (attenuate) rapidly: power density falls with the square of distance, which is why the loss is usually expressed in decibels, a scale based on powers of 10.

Basically, doubling the distance of a receiver from a transmitter quarters the received power density (the field strength halves). Just three meters from the antenna, a cell tower radio’s power density has dropped by an order of magnitude (10 times).

At 10 meters--perhaps at the base of the tower--power density is down two orders of magnitude. At 500 meters, a distance at which a human actually might be using the signals, power density has dropped by roughly six orders of magnitude.
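To make the distance arithmetic concrete, here is a minimal Python sketch of free-space, inverse-square attenuation. The transmit powers (100 milliwatts for a Wi-Fi router, 20 watts for a tower radio) and the one-meter reference distance are illustrative assumptions, not measurements of any particular equipment.

```python
import math

def power_density(tx_watts: float, meters: float) -> float:
    """Free-space power density in W/m^2: transmit power spread over
    the surface of a sphere of radius `meters` (inverse-square law)."""
    return tx_watts / (4 * math.pi * meters ** 2)

def db_below(reference: float, value: float) -> float:
    """Decibels by which `value` sits below `reference` (10 dB per order of magnitude)."""
    return 10 * math.log10(reference / value)

# Assumed transmit powers: ~100 mW for a Wi-Fi router, ~20 W for a tower radio.
for label, tx, distances in [("Wi-Fi", 0.1, [1, 2, 4]),
                             ("Tower", 20.0, [1, 3, 10, 500])]:
    ref = power_density(tx, distances[0])
    for d in distances:
        s = power_density(tx, d)
        print(f"{label} at {d:>3} m: {s:.2e} W/m^2 "
              f"({db_below(ref, s):4.1f} dB below the {distances[0]} m level)")
```

Doubling distance quarters the power density (a 6 dB drop), and each tenfold increase in distance costs 20 dB, which is why levels fall away so quickly.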


Though there is no scientific evidence that such low levels of non-ionizing radiation actually have health effects, such as causing cancer, a prudent person still will limit exposure, just as one prudently wears a seat belt in an automobile, limits time spent in the sun and so forth.

Would It Have Made a Difference If Telcos Had Stuck with ATM?

It is no longer a question, but there was a time not so long ago when global telcos argued for asynchronous transfer mode (the transport underlying broadband ISDN) as the next-generation protocol, rather than TCP/IP.

A report issued by Ovum in 1999 argued that “telecommunications providers are expected to reinvent themselves as operators of packet-switched communications networks by 2005.”
“Growth in Internet Protocol (IP) services is expected to fuel the transition,” Ovum argued.

Of course, all that was before the internet basically recreated the business, making connectivity providers a part of the broader ecosystem of applications, services and devices requiring internet connectivity to function. 

In retrospect, it might seem obvious that the shift of all media types (voice, video, image, text) to digital formats would make TCP/IP a rational choice. But it was not obvious within the telecommunications industry from the 1980s through the first decade of the 21st century. Telco engineers argued that ATM was the better choice to handle all media types.

But the internet, cheap bandwidth and cheap computing all were key forces changing the economics and desirability of IP, compared to ATM. 


Once internet apps became mass market activities, network priorities shifted from “voice optimized” to “data optimized.”

Connection-oriented protocols historically were favored by wide area network managers, while connectionless protocols were favored by local area network managers. The relative cost of bandwidth drove much of the decision making.

WAN bandwidth was relatively expensive; LAN bandwidth was not. On LANs, the header overhead associated with connectionless protocols such as TCP/IP simply did not matter. On expensive WAN links, per-packet overhead mattered a great deal, and for the small, steady payloads of voice traffic, ATM’s compact cells carried less overhead than full IP headers.
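A back-of-the-envelope sketch makes the overhead trade-off concrete. The header sizes below are the standard ones (a five-byte header on every 53-byte ATM cell; 20-byte IPv4 plus 20-byte TCP headers), while the two payload sizes are illustrative choices of mine, standing in for a small voice block and a full-size data segment.

```python
import math

# Standard header sizes: each 53-byte ATM cell carries a 5-byte header
# and 48 bytes of payload; a minimal IPv4 + TCP header pair is 40 bytes.
ATM_HEADER, ATM_PAYLOAD = 5, 48
IP_TCP_HEADER = 20 + 20

def atm_overhead(payload_bytes: int) -> float:
    """Fraction of bytes on the wire that are not user payload, for ATM."""
    cells = math.ceil(payload_bytes / ATM_PAYLOAD)   # payload segmented into cells
    wire = cells * (ATM_HEADER + ATM_PAYLOAD)        # a partly full last cell is padded
    return 1 - payload_bytes / wire

def ip_overhead(payload_bytes: int) -> float:
    """Fraction of bytes on the wire that are not user payload, for IPv4/TCP."""
    return IP_TCP_HEADER / (IP_TCP_HEADER + payload_bytes)

for payload in (48, 1460):   # a small voice block vs. a full-size data segment
    print(f"{payload:>4}-byte payload: ATM overhead {atm_overhead(payload):.1%}, "
          f"IP/TCP overhead {ip_overhead(payload):.1%}")
```

For small, constant-rate voice payloads, ATM’s fixed “cell tax” was modest next to full IP headers; once traffic shifted to large data packets, the comparison reversed, which is one reason data-optimized networks went with IP.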

It is by no means clear that the choice of a connectionless transmission system, instead of the connection-oriented ATM, would have changed the strategic position of the connectivity provider part of the internet ecosystem. Indeed, one key argument for IP was simply cost: IP devices and network elements were much cheaper than ATM-capable devices. 

One might argue the global telecom industry simply had no choice but to go with IP, no matter what its historic preferences might have been.

Monday, January 27, 2020

Applications and Use Cases Are the Big 6G Challenge

The whole point of any access network is to send and receive user and device data as quickly and as affordably as possible, to and from the core network and all the computing resources attached to it. The future 6G network, no less than the new 5G network, is likely to feature advances of that type.

Bandwidth will be higher, the network’s ability to support vast numbers of devices and sensors will be greater, latency will be even lower and the distance between edge devices, users and computing resources will shrink.

The biggest unknowns are use cases, applications and revenue models, as has been true since 3G. The best analogy is gigabit fixed network internet access. Users often can buy service running at speeds up to a gigabit per second, but few customers presently have use cases requiring anything like such speeds; most applications run comfortably at far lower speeds.

So it is likely that 6G, like 5G, often will feature capabilities that initially exceed consumer use cases.

NTT Docomo has released a white paper with an initial vision of what 6G will entail. Since every mobile generation since 2G has increased speeds and lowered latency, while connection density grew dramatically between 4G and 5G, we might look first to those metrics for change.


Docomo suggests peak data rates of 100 Gbps, latency under one millisecond and device connection density of 10 million devices in each square kilometer would be design goals. 
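As a quick sanity check on what that connection-density goal implies, here is a short sketch; the targets are Docomo’s, but the spacing arithmetic is my own illustration.

```python
import math

# Docomo's stated 6G design goals, as summarized above.
PEAK_RATE_GBPS = 100
LATENCY_MS = 1                     # "under one millisecond"
DEVICES_PER_KM2 = 10_000_000

# What the density goal implies, on average (my arithmetic, not Docomo's):
area_per_device_m2 = 1_000_000 / DEVICES_PER_KM2   # 1 km^2 = 1,000,000 m^2
spacing_m = math.sqrt(area_per_device_m2)          # side of the square each device "owns"

print(f"Design goals: {PEAK_RATE_GBPS} Gbps peak, <{LATENCY_MS} ms latency")
print(f"Area per device: {area_per_device_m2:.1f} m^2")   # 0.1 m^2
print(f"Average device spacing: ~{spacing_m:.2f} m")      # ~0.32 m
```

Ten million devices per square kilometer works out to one device in every 0.1 square meters, roughly one device every third of a meter, on average.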


Along with continued progress on the coverage dimension, 6G standards might extend to space, sky and sea communications as well. Docomo also believes quality of service mechanisms exceeding “five nines” and better device performance (devices that need no charging, cheaper devices) would be parts of the standard.

Looking at commercial, economic or social impact, since the 3G era we have tended to see a lag of execution compared to expectations. In other words, many proposed 3G use cases did not emerge until 4G. Some might say a few key 4G use cases will not flourish until 5G is well underway.


For that reason, we might also discover that many proposed 5G innovations will not actually become typical until the 6G era. Autonomous vehicles are likely to provide an example. 


So Docomo focuses on 6G outcomes, not just network performance metrics. Docomo talks about “solving social problems” as much as about “every place on the ground, sky, and sea” having communications capability. Likewise, 6G might be expected to support the cyber-physical dimensions of experience.


Also, 5G is the first mobile platform to include key support for machine-to-machine communication, instead of primarily focusing on ways to improve communication between humans. Docomo believes 6G will deepen that trend. 

It is worth noting that the 5G spec for the air interface entails availability higher than the traditional telecom standard of “five nines” (availability of 99.999 percent). 5G networks are designed to run at “six nines.” So 6G might well run at up to “seven nines” (99.99999 percent availability). 


The legacy telecom standard of five nines meant outages or service unavailability of 5.26 minutes a year. The 5G standard equates to less than 32 seconds of network unavailability each year. A seven nines standard means 3.16 seconds of unavailability each year. 
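Those downtime figures follow directly from the availability percentages. Here is a minimal sketch of the arithmetic, using a 365-day year, which is why the last figure differs by a few hundredths of a second from one computed with 365.25 days.

```python
SECONDS_PER_YEAR = 365 * 24 * 3600   # 31,536,000 (365-day year)

def downtime_seconds_per_year(nines: int) -> float:
    """Allowed unavailability per year for an 'N nines' availability target."""
    availability = 1 - 10 ** -nines  # e.g., 5 nines -> 0.99999
    return SECONDS_PER_YEAR * (1 - availability)

for n in (5, 6, 7):
    s = downtime_seconds_per_year(n)
    print(f"{n} nines: {s:8.2f} seconds/year ({s / 60:.2f} minutes)")
```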

Some might say 4G was the first of the digital-era platforms to design in support for the internet of things (machines and computers talking to machines and computers) rather than the traditional human mobile phone user. That trend is likely to be extended in the 6G era, with more design support for applications and use cases, and with artificial intelligence support as a key design goal as well.


In part, that shift to applications and use cases matters because wringing still-more raw performance out of the network is becoming less important than creating new use cases that take advantage of the performance boosts.


Just as almost no consumer users actually “need” speeds in the hundreds of megabits per second, much less gigabit speeds, so few human users or sensors will actually “need” 6G levels of throughput and latency.


Architecturally, the evolution towards smaller cells will continue, in part to support millimeter wave frequencies, in part to assure better connectivity. Where traditional cell architectures have emphasized non-overlapping coverage, 6G networks might deliberately overlap cells to assure connectivity.


That almost certainly will require more development of low-cost beam forming and signal path control. Having cheap artificial intelligence is going to help, one might suggest.

Friday, January 24, 2020

Like It or Not, Dumb Pipe Is the Fixed Network's Foundation

Perhaps you see the irony of cable TV executives, whose businesses once were founded on selling applications, now saying the killer use case is internet access, a textbook example of “dumb pipe.”

“We’ve made a pivot to a broadband-centric cable company,” said Comcast CEO Brian Roberts. That is another way of saying the strategic product for a multi-product company is a classic “dumb pipe” service that is the prerequisite for using all TCP/IP-based apps and services deliverable by the public internet. 

Ask virtually any telecom executive selling services to consumers (business to business is a different matter) and you are likely to hear that they are not just “dumb pipes,” in the sense of providing a low-value, commodity-priced product. Rather, you will hear any number of arguments that the full range of products provides higher-value, differentiated products at a range of prices and profit margins, sometimes to more-attractive customer bases and geographies.

What many cable TV executives now argue is that their network services are based precisely on dumb pipe internet access, and not the traditional video subscriptions that historically drove value, revenue and profits. 

That is but one example of how the internet, and the separation of applications from networks, has revolutionized the communications and computing businesses. 

Until the internet era, all consumer mass market services were “apps,” not “dumb pipe.” Consumers bought the app called “dial tone,” the ability to place a phone call, not directly the use of a wire enabling the use of dial tone. Customers bought cable TV not to use a coaxial cable or modulated radio frequency signals, but to watch video. People bought mobile phones and used mobile services to send and receive text messages, not to use a data channel. 


In the internet era, the “value” of fixed network voice, mobile voice, mobile messaging and linear TV subscriptions has dropped. The value of access to the internet (a dumb pipe service) has grown, in both mobile and fixed domains. 

So consumer preferences--and the revenue earned--have changed. Though the value of mobile communications “anywhere” remains, the value of some apps (voice, messaging, linear video) has declined.

Or, put another way, dumb pipe has grown more important, while traditional apps have become less important and valuable. 

This can be seen clearly in the shifts of service provider revenue in the U.S. market from long distance--the former revenue and profit generator--to mobile service. What one sees is a 50-percent toll revenue drop over a decade, and its replacement by a new lead service, mobility. 


But the mobile network is a rival facility. The fixed network business model now hinges on dumb pipe internet access. Other services and apps remain important: how many service providers would willingly surrender their voice or video revenues?

But new questions must be asked. To the extent the fixed network must rely increasingly on one service--internet access--how does the business model change? Can the full value of capital investments be recouped solely or primarily from internet access?

If not, what can be done to find replacements for lost voice and video revenue? And how soon will most fixed network executives become comfortable saying their business models are built on dumb pipe? For how many service providers will this prove true?

Under the Best of Circumstances, How Much Economic Growth Can High Quality Communications and Computing Produce?

A government official from one of the South Pacific islands asked me recently why there frequently seems to be so little discussion of the role 5G can play in promoting economic development in the South Pacific. 

There are a couple of practical reasons. Economic activity, almost always, hinges on population. Most economic activity occurs where there are substantial numbers of people. High rates of economic growth require other inputs as well, but population mass is the foundation, since perhaps 70 percent of all economic activity is generated by consumer spending.

So policy makers confront the fact that total population in the South Pacific is small, perhaps 2.3 million people, scattered across 10 million square miles.

In other words, as a practical matter, ask yourself whether even the absolute best communication facilities--fast internet, low retail costs, ubiquitous terrestrial coverage, big modern data centers, 100-percent fast mobile internet coverage--can make a big difference in spurring economic development in areas of low population, remote from population centers.

Producers and suppliers go where the people are, fundamentally. So economic activity tracks population. On the scattered South Pacific islands, gross domestic product can be quite low, by global standards. 

With the exception of Australia, New Zealand and Papua New Guinea, GDP on any single island is quite small, orders of magnitude smaller than on the two bigger islands and the continent of Australia. The region’s GDP also is quite small on a global scale.

That being the case, even 100-percent adoption of any technology in the South Pacific does not move the global needle. Conversely, what happens in India and China, right now, drives growth of both fixed and mobile internet access globally.

At a more granular level, and ignoring contribution to global output, assume that there are no gaps whatsoever in small South Pacific island technology supply or take rates, and that supply is equal to that found among the top 20 countries globally. Even then, how much difference would it make?

We all commonly believe that broadband causes economic development and, conversely, that its lack retards economic growth. Let us be clear: the two are correlated. What nobody can prove is the thesis that better broadband “causes” faster or greater economic growth. But we all behave as though this were the case.

But it might not be true. It is entirely possible that strong economic growth itself creates the demand for better computing and communications assets and deployment. 

In other words, wealthy consumers in areas with high job growth and economic growth demand--and can afford--better internet. That, in turn, creates the supply. 

In the rural U.S., or any other similar market, we might note that the business case for more investment is sharply limited, precisely because the pool of customers is sharply limited.

What could change? How much more economic output is possible? Economists always point out that consumer activity accounts for 70 percent of gross domestic product. So people matter, and that is the ultimate point. Even supplied with the absolute best computing and communications resources, the South Pacific islands are too thinly populated and too remote from other population centers to become much-bigger platforms for economic activity.

How much more job creation, retail spending, use of edge computing, warehouse siting, transportation facilities, factories or business activity is possible, even with the absolute best computing and communications facilities being in place?

In other words, as much as policymakers should always strive to make high-quality communications available in rural and remote areas, the actual potential economic upside is probably sharply limited. The social and educational benefits are another matter.

Still, we generally overestimate the effect high-quality computing and communications can have, when it comes to economic development, in lightly-populated areas remote from large population centers.

Thursday, January 23, 2020

Most Computing Now Requires Communication

Communications has been important for some enterprise data applications since the mainframe days, but computing now fundamentally relies on communications. The old phrases “computing and communications” and “information and communications technology” now describe a functional reality underpinning nearly all computing instances and workloads.

In the early mainframe days, relatively few businesses required communications support for their computing needs. But communications gradually became a foundational requirement for computing.

Telcos in the 1950s began using their own networks for internal purposes, eventually creating the T1 standard for data communications in 1958.

Communications arguably became more important for computing with the development of ARPANET in the 1960s, with the emergence of Transmission Control Protocol/Internet Protocol in the 1970s, and in the 1980s the ability to use dial-up telephone networks to support ARPANET.

Communications became even more important in the 1990s with the invention of the World Wide Web and the emergence in the 2000s of the Internet and WWW as staples of consumer experience.

The smartphone and the web illustrate the centrality of communications for computing, as remote computing requires communications. A similar observation can be made about business and enterprise computing, which now also relies on remote server access and public and private cloud computing.

So it should come as no surprise that public cloud spend is quickly becoming a significant new line item in information technology budgets, especially among larger companies, a survey sponsored by RightScale suggests. 


Among all respondents, 23 percent spend at least $2.4 million annually ($200,000 per month) on public cloud while 33 percent are spending at least $1.2 million per year ($100,000 per month). 


Among enterprises the spend is even higher, with 38 percent exceeding $2.4 million per year and half (50 percent) above $1.2 million per year. 


Small and medium businesses generally have fewer workloads overall and, as a result, smaller cloud bills (just over half spend under $120,000 per year). However, 11 percent of SMBs still exceed $1.2 million in annual spend, RightScale says. 


Enterprise respondents run 79 percent of workloads in cloud, with 38 percent of workloads in public cloud and 41 percent in private cloud. Workloads running in private cloud may include workloads running in existing virtualized environments or bare-metal environments that have been “cloudified,” says RightScale. 


Non-cloud computing comprises about 21 percent of respondent workloads. 


Small and mid-sized businesses report running 43 percent of workloads on public cloud and 35 percent on private cloud. Some 22 percent of workloads remain on non-cloud platforms.


source: RightScale

Wednesday, January 22, 2020

Western Union Partners with Airtel for Mobile Payments

Western Union is partnering with Bharti Airtel to launch real-time mobile payments in India and across 14 countries in Africa, using Airtel Payments Bank and Airtel Africa.

Airtel Payments Bank customers will soon be able to direct a Western Union money transfer into their bank accounts 24 hours a day, seven days a week. Global senders can use Western Union’s digital services in 75 countries plus territories, or the walk-in Agent network across more than 200 countries and territories.

The collaboration with Airtel Africa will enable more than 15 million Airtel Money mobile wallet users in Nigeria, Uganda, Gabon, Tanzania, Zambia, DRC, Malawi, Madagascar, Kenya, Congo, Niger, Tchad, Rwanda and Seychelles to simply route any money transfer received from across the world into their wallets. 

It will also allow senders around the world to push funds directly to an Airtel Money mobile wallet in real-time and store value or pay for goods and services. Service launch is expected in 2020.

India is the world’s largest remittance-receiving country, according to the World Bank.

Alphabet Sees Significant AI Revenue Boost in Search and Google Cloud

Google CEO Sundar Pichai said its investment in AI is paying off in two ways: fueling search engagement and spurring cloud computing revenue...