
Sunday, December 30, 2018

Is It the "Year of X"?

It’s that time of year when some feel compelled to prognosticate on “what will happen next year,” while others remind us of what did happen “last year.” And there always are a brave few who will try to capture the essence in a single phrase: “the year of X,” whatever X is said to be.

At a high level, we might well look back at such highly-distilled “year of X” predictions and note that they almost never come true on schedule. “The year of X,” whatever X is said to be, nearly always occurs (in the sense of commercial adoption or an inflection point of adoption) in some future year.

My simple way of describing this situation is to say that “calling something the ‘year of X’ almost ensures it will not be.” Of course, some will argue that is not what they mean.

Instead, they tend to mean this is the year some trend is popularized or discovered. Okay, in that sense, there is firmer--yet still tenuous--ground to stand on. Rarely does a big new thing burst onto the scene, in terms of public awareness, in a decisively new way.

What does happen is that some arbiter “proclaims” that this has happened. The designation is arbitrary.

The point is that any truly significant new technology, platform or commercial activity takes quite some time to reach commercialization, typically long after the initial hype has been crushed by disillusionment.


Even highly successful new technologies can take decades to reach commercial ubiquity, though today’s software-driven products are adopted faster than innovations of the past.

It still can take a decade for widespread consumer use of any product or service to reach 50 percent to 60 percent adoption.


Recall, too, that most new products and most new companies fail: they simply never succeed as commercial realities. And we often misjudge the actual time any innovation takes to reach 10 percent or some other level of adoption on a mass level.

There is debate about how fast smartphones were adopted, for example. Did it take seven years, or more than a decade, for usage to reach half of consumers? Some estimate it took just seven years; others have argued adoption still had not reached 50 percent after a decade.

And depending on how one defines “smartphone,” reaching 50 percent adoption took from two decades to nearly three decades.



For all such reasons, some of us tend to discount the notion of a “year of X.” Truly significant innovations often take longer than expected to reach mass adoption. On the other hand, there arguably are points in time when public awareness reaches something like an inflection point.

In most cases it is difficult to measure the actual year when a shift becomes significant. Is it the point where 10 percent of people recognize a term, or say it is important? Or when 20 percent, 30 percent or 40 percent say so?

More significantly, at what point of innovation purchase or regular usage has something “arrived,” in a commercial sense?

Thursday, August 6, 2020

Advanced Technology Takes Longer Than You Think to Become Mainstream

Advanced technology often does not get adopted as rapidly as the hype would have you believe. In fact, most useful advanced technologies tend not to go mainstream until adoption reaches about 10 percent. That is where the inflection point tends to occur. That essentially represents adoption by innovators and early adopters. 

source: LikeFolio
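
To make that inflection point concrete, here is a minimal sketch of a logistic adoption curve, the classic S-curve of innovation diffusion. The midpoint and growth rate below are invented for illustration, not fitted to any market data.

```python
import math

# Illustrative logistic S-curve of innovation diffusion. The midpoint and
# growth rate are invented parameters, not fitted to any market data.

def adoption(t, midpoint=20.0, rate=0.35):
    """Fraction of the market that has adopted by year t."""
    return 1.0 / (1.0 + math.exp(-rate * (t - midpoint)))

def years_to_reach(share, midpoint=20.0, rate=0.35):
    """Invert the logistic: the year a given adoption share is reached."""
    return midpoint + math.log(share / (1.0 - share)) / rate

for year in (5, 10, 14, 20, 26):
    print(f"year {year:2d}: {adoption(year):5.1%} adopted")
for share in (0.10, 0.50, 0.90):
    print(f"{share:.0%} adoption reached at year {years_to_reach(share):.1f}")
# With these assumed parameters, reaching 10% takes almost 14 years, yet
# the climb from 10% to 50% takes only about six more: the inflection.
```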


One often sees charts suggesting that popular and important technology innovations are adopted quite quickly. That is almost always an exaggeration. The issue is where to start the clock: at the point of invention or at the point of commercial introduction? Starting from invention, it takes quite some time to reach 10 percent adoption, even if it later seems as though it happened faster.

source: Researchgate


Consider mobile phone use. On a global basis, it took more than 20 years for usage to reach close to 10 percent of people. 

source: Quora


That is worth keeping in mind when thinking about, or trying to predict, advanced technology adoption. It usually takes longer than one believes for any important and useful innovation to reach 10 percent adoption.


source: MIT Technology Review


That is why some might argue 5G will hit an inflection point when about 10 percent of customers in any market have adopted it.

Thursday, March 15, 2012

Sometimes, Not Matching Competitor Offers is the Wise Strategy

France Telecom will not match the low-cost mobile offers recently launched by Iliad, because such aggressive pricing would be bad for network quality and innovation in the long run, says France Telecom CEO Stephane Richard. That Orange won’t compete on price might strike you as unwise.

Goldman Sachs, for example, forecasts that Iliad's market entry will cause France Telecom to lose a third of its operating profits in its domestic market by 2015.

But there are ample precedents for France Telecom’s restraint. Beyond higher marketing costs as competition escalates, sometimes all an incumbent can do is harvest a business. That, in fact, was AT&T’s strategy when it was the dominant long distance provider facing a growing number of competitors and continually declining prices for its product.

A similar strategy has been taken by incumbent telephone companies in the face of growing competition from VoIP providers. You might argue that telcos should have jumped into VoIP aggressively, matching competitors’ lower prices.

They generally haven’t done that. The reason is that incumbents lose more than they gain by matching lower prices, even when everyone would agree lost market share is the inevitable result.

For an incumbent telco, matching lower competitor prices implies lower retail prices across the board, for the entire customer base, not just for the consumers buying the VoIP service. A rational telco executive would do better to preserve gross revenue and profit margin on a gradually shrinking base of customers than to adopt across-the-board price cuts in an effort to slow the market share losses.
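
A toy calculation makes that logic concrete. Everything in the sketch below is invented for illustration (subscriber counts, prices, churn rates); the point is only the comparison between the two strategies.

```python
# Toy comparison: harvest a shrinking base at current prices, vs. cutting
# prices across the board to slow share loss. All figures are invented.

def revenue(subscribers, monthly_arpu, annual_churn, years):
    """Total revenue over the horizon, with the base shrinking each year."""
    total = 0.0
    for _ in range(years):
        total += subscribers * monthly_arpu * 12
        subscribers *= (1.0 - annual_churn)
    return total

base = 1_000_000  # starting subscribers

# Harvest: keep prices, accept faster share loss.
harvest = revenue(base, monthly_arpu=40.0, annual_churn=0.10, years=5)
# Match: cut prices for the whole base, slow the share loss.
match = revenue(base, monthly_arpu=25.0, annual_churn=0.04, years=5)

print(f"Harvest at full price: ${harvest / 1e6:,.0f}M")  # about $1,966M
print(f"Match lower prices:    ${match / 1e6:,.0f}M")    # about $1,385M
```

Under these assumed numbers, repricing the entire base costs far more than the extra churn the harvesting strategy accepts, which is the rational-executive logic described above.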

"The real risk is that all the operators become 'low-cost', meaning less investment, fewer services and jobs," said Richard.

Iliad, which markets its services under the name Free, touched off a price war on January 10, 2012 with an offer of unlimited calls to France and most of Europe and the United States, unlimited texts, and 3 gigabytes of mobile data for 19.99 euros ($25.83) per month, without a contract.

France Telecom and Vivendi reacted by cutting some mobile prices but only on the offers sold without phone subsidies and contracts.

Some analysts predict that France Telecom, Vivendi and Bouygues will all become structurally less profitable as Iliad takes market share in the coming years.

But that has happened before in the telecom business. Firms as large as AT&T and MCI watched profits gradually decline, to the point that both were purchased by other providers in the market.

Right now, local telcos essentially are harvesting their legacy voice business, “allowing” VoIP competitors to take market share. That is a rational strategy, especially in the consumer segment of the business.

The point is that there are times when an incumbent simply cannot match prices, and has to prepare to lose market share. That might be a bigger issue for lots of mobile service providers, soon.

There is growing evidence that the high-margin mobile text messaging market is past its peak.

Danish SMS traffic, for example, decreased by over 20 percent in the first six months of 2011, according to Strand Consult, and the trend will continue in 2012.

Text messaging revenue is not declining in all markets, but growth is slowing in most developed markets. The most-recent data from the CTIA suggests U.S. text messaging growth has slowed to about nine percent.

In the Danish market, three out of four mobile operators have been experiencing a steady decrease in their text messaging (short message service, or SMS) traffic month after month.

From 2010 to 2011, TDC experienced an SMS traffic drop of 17 percent, Telia lost 18 percent and Telenor 26 percent, while the fourth operator, 3, was the only one whose SMS traffic grew.

That 3 saw text messaging growth is largely attributable to the fact that 3 is gaining customers and share in the market. SMS traffic on the 3 network grew by 29 percent.

But, overall, the number of Danish SMS messages fell to 6.4 billion during the first half of 2010 and to 6.2 billion during the first half of 2011, a drop of about three percent year over year.

Facebook messaging is the reason for the drop, Strand Consult argues.

So what are Danish operators doing? They are bundling mobile broadband with SMS and MMS packages as part of a smartphone purchase. That means service providers get paid even as the volume of text messages declines.

Finland's largest carrier, Sonera, for example, recorded a 22 percent decline in texting on Christmas Eve in 2011, versus the same night in 2010.

It isn't that people are communicating less. They are just using different methods of communicating.

Hong Kong also apparently saw a similar decrease on Christmas, dropping 14 percent from the same day in 2010. Netherlands service provider KPN provided an early warning when it announced significant declines in messaging volume earlier in 2010.

Dutch telecom regulator OPTA reports a significant decline in the number of SMS messages sent in the Netherlands in the first half of 2011, compared with the previous six-month period.

The country's largest operator, KPN, has also reported declining year-on-year messaging volumes over the last few quarters due to what it calls "changing customer behavior."

Wireless Intelligence says text messaging volumes are falling in France, Ireland, Spain and Portugal as well.

According to OPTA, the total number of SMS messages sent in the Netherlands stood at 5.7 billion for the first six months of the year, down 2.5 percent from 5.9 billion in the second half of 2010, even though total text messaging revenue rose slightly (0.6 percent) to EUR 378 million during the period.

That should not come as a surprise. The number of over-the-top and social messaging alternatives has been growing for years. But there is a “network effect” for messaging, as there is for any other communications tool. Until a user is fairly sure that nearly everybody he or she wants to communicate with can be reached by a particular tool, adoption is slower.

But there always is a tipping point, where the expectation changes from "I doubt this person uses this tool" to "there is a good chance they use this tool." Finally, there is the point of ubiquity, when the assumption simply is that "everybody" uses the tool.
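
A crude numerical sketch can illustrate that tipping-point dynamic. The model below assumes each holdout adopts a new messaging tool once at least three of ten contacts are reachable on it; the five percent seed share and the threshold are invented for illustration.

```python
from math import comb

# Mean-field sketch of a messaging network effect: a holdout adopts once
# at least `threshold` of their contacts are reachable on the new tool.

def p_adopt(f, contacts=10, threshold=3):
    """Chance a holdout has >= threshold adopter contacts at adoption level f."""
    return sum(comb(contacts, k) * f**k * (1 - f)**(contacts - k)
               for k in range(threshold, contacts + 1))

seed = 0.05  # five percent early adopters to start
f = seed
for step in range(1, 16):
    f = 1 - (1 - seed) * (1 - p_adopt(f))
    print(f"step {step:2d}: {f:.0%} adoption")
# Adoption creeps along for roughly ten steps, then cascades past the
# tipping point toward near-ubiquity within a few more.
```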

Also, the history of text messaging and email is instructive. Though most cannot remember a time when it was so, email and messaging services once upon a time were not federated. In other words, you could not send messages across domains.

History also tells us what happens after federation: usage explodes. With alternative messaging platforms, we still are not in a "full federation" mode, where anybody can send messages to any other user, irrespective of what device, operating system, service provider or application they prefer to use. That day will come, though, and text messaging usage and revenues will suffer.

The maturing market illustrates a key element of business strategy.

A rational service provider strategy, when confronted by such challenges, might simply be to harvest existing revenue streams, using bundling and other approaches to maintain as much revenue as possible in legacy lines of business, while investing in the next generation of services.

As when CenturyLink halted Qwest’s old VoIP business to emphasize sales of legacy voice services, sometimes the wisest course is not to embrace disruptive services but to “cope with” them, while growing services and revenues in other areas.

Wednesday, June 21, 2023

Will AI Mostly Produce "Faster and Cheaper" or "Better?"

Despite all the hype, artificial intelligence is going to produce benefits and outcomes that are primarily quantitative--faster or cheaper--rather than qualitative, where “better” can include the ability to create whole new products and industries, plus revenue and business models, that didn't exist before. 


Enterprises justify information technology investments primarily because they deliver quantitative outcomes: revenue gains, cost savings, risk reduction or higher productivity. These outcomes are important because they are tangible and can be measured in some way.


In that sense, IT investments might be evaluated--if with great difficulty--the same way all other investments get evaluated: using standard accounting metrics such as net present value, return on investment, payback period, internal rate of return, total cost of ownership or economic value added.


Net Present Value (NPV)

NPV is one of the foremost financial key performance indicators (KPIs) used to evaluate large, capital-intensive IT projects. NPV relies on accurate cash flow projections extending over the life of the project, alongside a discount rate which is used to account for the time value of money. Project approval depends on obtaining a positive NPV. IT projects can also be compared with one another using NPV whenever firms need to ration scarce IT capital.

Return on Investment (ROI)

ROI is an accounting-based ratio that compares total project income to the level of project investment. ROI does not take account of the time value of money, meaning that projects with a longer-term return window would be treated on par with projects that generate equal returns over a shorter time period. Similar to NPV, the accuracy of ROI calculations depends on being able to identify the scale of future cash flows arising from an investment.

Payback period

The payback period is a simplistic method that calculates the time needed for a project to break even (recover its investment costs). In a risk-averse firm, managers may gravitate toward IT projects with a shorter payback period. In practice, payback should not be used in isolation but rather alongside other metrics that take account of project risk and that consider the flow of benefits beyond the end of the payback period.

Internal Rate of Return (IRR)

Given all future cash flows and an upfront investment for an IT project, IRR is the discount rate that would return a value of zero for NPV. IRR can be considered the true rate of return in that it takes account of the time value of money and the flow of value over time. IRR can be benchmarked against desired or minimum rates of return, including the weighted average cost of capital.

Economic Value Added (EVA)

EVA--also called economic profit--is a measure of residual value generated by a project after deducting the cost of invested capital. Since all capital could be allocated to other ends, EVA charges each project for the capital it consumes. This allows for a more equitable comparison when managers are in a position to pick from different IT projects with different rates of return.

Total Cost of Ownership (TCO)

TCO captures a multitude of different cost items in a single metric such as the cost of hardware, software, and services, allocated per application, user, department, etc. TCO can also be represented as a cost per period of time. TCO does not take into account the benefit or value to the organization of using the underlying resource and is, as such, a questionable metric unless accompanied by other metrics such as ROI, NPV or payback period.

source: Springer 
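
For concreteness, here is a minimal sketch of how several of these metrics could be computed for a hypothetical IT project. The cash flows and discount rate are invented, and the helper functions are illustrative rather than any standard library’s API.

```python
# Minimal sketch of the metrics described above, applied to a hypothetical
# IT project. Cash flows and the discount rate are invented.

def npv(rate, cashflows):
    """Net present value; cashflows[0] is the upfront (negative) investment."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-6):
    """Internal rate of return by bisection: the rate at which NPV is zero."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid  # NPV still positive, so the true IRR is higher
        else:
            hi = mid
    return (lo + hi) / 2

def payback_period(cashflows):
    """Years until cumulative cash flow first turns non-negative."""
    total = 0.0
    for year, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return year
    return None  # project never breaks even

flows = [-100_000, 30_000, 40_000, 45_000, 50_000]  # year 0 through year 4
invest, income = -flows[0], sum(flows[1:])

print(f"NPV at 10%: {npv(0.10, flows):,.0f}")       # positive: approve
print(f"IRR:        {irr(flows):.1%}")              # about 21%
print(f"Payback:    year {payback_period(flows)}")  # year 3
print(f"ROI:        {(income - invest) / invest:.0%}")  # 65%, ignores timing
```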


Whether the inputs to be measured are the application of mainframe, minicomputer or PC-based computing, use of client-server architectures, word processing, spreadsheets, business software, video streaming or digital transformation, the measured outcomes always seem to focus on quantitative improvements: faster or cheaper.


Less tangible outcomes, such as brand enhancement or competitive advantage in core markets, are cited but are hard to quantify. Perhaps least cited of all is the application of information technology to create entirely new products and industries. Such qualitative outcomes fall under the framework of “better.”


source: ISSA


The use of mainframe computers in the 1960s and 1970s led to average productivity gains in businesses of about 30 percent, according to Datamation.


A study by the Aberdeen Group found that companies that invested in information technology had an average cost savings of 12 percent. A study by IDC found that companies that invested in IT had an average production rate increase of 15 percent.


A study by the Gartner Group found that companies that invested in IT had an average error rate reduction of 20 percent. A study by McKinsey found that companies that invested in IT had an average innovation cycle time reduction of 30 percent. 


A study by the Aberdeen Group found that businesses that invest in IT achieve an average ROI of 22 percent. A study by Gartner found that businesses that invest in IT can improve their customer satisfaction by an average of 15 percent.


A study by McKinsey found that businesses that invest in IT can reduce their costs by an average of 10 percent.


That is not to say “qualitative” outcomes are not cited. They simply are hard to quantify. Most would likely agree that the introduction of personal computers in the 1980s led to qualitative benefits for businesses that include improved decision-making, increased collaboration, and better customer service.


Smartphones, digital cameras, personal computers before them, perhaps tablets, videogame consoles, social media and the web are easy-to-understand applications of technology that created new products and industries, or, at the very least, transformed existing industries. 


Many similar claims could be made for the benefits of the internet and cloud computing, and will be made for AI as well. In the end, perhaps most of the claimed benefits will be quantitative in nature. “Faster and cheaper” will be the advantages cited.


Still, in some cases, “better” will be the outcome: whole new products, industries and revenue models will be created. But that should not be expected to be the “main” outcome. Process improvements are likely to dominate, as they have in the past.


Wednesday, November 19, 2014

“It Can’t be Done”

Among the most dangerous statements an experienced and knowledgeable executive can make are that something “cannot be done,” or that a new way of doing something is underpowered, under-featured and essentially a non-serious approach to solving a problem.

If confronted with a requirement to support huge amounts of bandwidth, hundreds of times to perhaps 1,000 times greater than anything yet seen, it might seem obvious that only fixed networks will be able to handle the load.

That is why Marcus Weldon, Bell Labs president and Alcatel-Lucent CTO, believes sophisticated core and fixed networks are essential, and that explorations of Internet access networks using unmanned aerial vehicles or balloons are unsophisticated approaches little better than “toys,” compared to the best of today’s telecom networks.

The phrase "toy networks" as applied to new Internet access platforms such as balloons or unmanned aerial vehicles reflects a perhaps-understandable reaction to new networks that lack the sophistication of the existing and future networks envisioned by the telecom industry.

But it is profoundly dangerous to underestimate the threat posed by such underpowered or feature-deficient new approaches. You might recall that the same sort of sentiment was uttered about voice over Internet Protocol.

Disruptive innovation, a term coined by Harvard Business School Professor Clayton Christensen, describes a process by which a product or service takes root initially in simple applications at the bottom of a market and then relentlessly moves up market, eventually displacing established competitors.

Such innovations might reasonably be derided by existing suppliers as “not very good” products with limited feature sets, unstable quality and some restrictions on ease of use. Skype initially could only be used by people communicating using personal computers, for example.

Microwave Communications Inc. (MCI) originally competed with AT&T for long-distance voice calls using a microwave network that likewise was deemed less reliable than AT&T’s own network.

Wi-Fi hotspots originally were hard to find, sometimes difficult to log on to, and obviously did not have the ubiquity of mobile Internet access or the speed of an at-home Internet access service.

Netflix originally required mailing of DVDs to view content; it was not “on demand,” and could not be viewed on a variety of devices.

What happens, over time, is that disruptive attacks gradually “move up the stack” in terms of features and quality of service, eventually competing head to head with the incumbents.

If you live long enough, you might see many examples of such derision.

I can remember being at a meeting at the headquarters of the National Cable Television Association, in the early days of high-definition television discussions, where it was proposed that a full HDTV signal could be squeezed from about 45 Mbps of raw bandwidth into the 6-MHz channelization used by the North American television industry.

The room essentially exploded, as the attendees, mostly vice presidents of engineering from the largest cable TV and broadcast firms, disputed the sheer physics of the proposal. Later, the executive who suggested HDTV in 6 MHz was indeed possible talked with his firm’s engineering vice president about the science, to reaffirm that such a thing actually could be done. “Are you sure about this?” was the question, given the magnitude of opposition.

To make a longer story short, it did prove feasible to compress a full HDTV signal into just 6 MHz of bandwidth, making for a much-easier financial transition to full HDTV broadcasting, as well as an ability for cable TV operators to support the new format.
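
Some back-of-the-envelope arithmetic shows why that outcome demanded serious compression. The frame parameters below are representative assumptions rather than the exact broadcast specification; the 19.39 Mbps figure is the payload of the ATSC 8-VSB modulation eventually used in a 6-MHz broadcast channel.

```python
# Back-of-envelope arithmetic for HDTV in a 6-MHz channel. Frame
# parameters are representative assumptions, not the broadcast spec.

width, height = 1920, 1080   # pixels per HDTV frame
fps = 30                     # frames per second
bits_per_pixel = 1.5 * 8     # 8-bit samples with 4:2:0 chroma subsampling

raw_mbps = width * height * fps * bits_per_pixel / 1e6
proposed_mbps = 45.0         # the ~45 Mbps raw-bandwidth figure debated
channel_mbps = 19.39         # ATSC 8-VSB payload in a 6-MHz channel

print(f"Uncompressed studio HDTV: {raw_mbps:,.0f} Mbps")  # ~746 Mbps
print(f"Proposed transport rate:  {proposed_mbps} Mbps")
print(f"6-MHz channel payload:    {channel_mbps} Mbps")
print(f"Compression needed: about {raw_mbps / channel_mbps:.0f}:1")
```

Roughly 746 Mbps of raw video against about 19 Mbps of channel capacity is nearly a 38:1 ratio, which MPEG-2 video coding ultimately made achievable.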

Similarly, when the U.S. cable TV industry began to ask for analog optical transmission systems capable of carrying 20 channels of standard definition video without complicated channel-by-channel coding and decoding, a distinguished engineer from Bell Laboratories privately assured me that such a thing was in fact not possible, and that people who claimed it was possible were simply wrong.

To make a longer story short, it did indeed prove possible to take a full complement of analog video signals (40 channels, as it turned out), convert the full set of broadband signals to analog optical format, and deliver them over distances useful for cable TV purposes.

On another occasion, the vice president of one of the world’s biggest suppliers of equipment said privately that “digital subscriber line does not work” as a platform for high speed Internet access, even at relatively low speeds. Ultimately, that also proved incorrect. Over time, DSL performance was not only proven to be commercially viable, but also delivered much-faster speeds, over longer distances, as experience was gained.

The point is that when a smart, experienced, thoroughly-knowledgeable executive says that something “cannot be done,” one has to translate. What the statement means is only that, at a given point in time, before the application of effort and ingenuity, a given entity has not been able to do something.

That does not actually mean something literally “cannot be done.” Quite often, formerly impossible things actually are made possible, after dedicated investigation and development.

That sort of thing happens often enough that novel approaches to solving problems should not be lightly dismissed, even when experts deride them. New platforms and approaches often do appear to be “toys” at first. But that is not where developments remain for all time.

Executives generally truly believe disruptive new platforms and approaches are unsatisfactory substitutes for higher-performance solutions. That often is quite true, at first. But substitute products often do not remain fixed at such levels. They often improve to the point that, eventually, the new approach is a workable solution for a wider range of applications and customer use cases.
 
Having lived long enough to see the “smart guys” proven quite wrong, I am careful never to argue something really cannot be done. Sometimes, somebody, or another company, is able to do so, even when a reasonable, smart, experienced practitioner “knows” it cannot be done.

Friday, September 18, 2020

How Much Post-Covid Change Will Businesses Achieve?

One often hears it said these days that one impact of Covid-19 on organizations and firms is that it will cause permanent changes in the ways businesses and organizations work. Most of those changes, though--agility, reaction speed, cost reduction, productivity changes, customer focus, innovation, operational resiliency, growth, financial performance, for example--were important organizational objectives before the Covid-19 pandemic. 


Even remote work on a full-time, part-time or episodic basis is a decades-old trend, though many believe the difference now is that a large percentage of information workers will shift to permanent remote work, and a larger number of observers might agree that many information workers will routinely work from home more often.


One might note that very few of the key changes executives now say they have made--because of their experience with the pandemic--can be addressed directly by better or greater use of communications services. A recent survey by McKinsey found that “speed” was the driver of organizational changes related to Covid-19.


source: McKinsey


What is not so clear is how much actual change has been accomplished, as organizational change normally is very difficult and also takes a long time. Consider the extensive changes needed to increase speed and agility.


Organizational silos, slow decision making and lack of strategic clarity, for example, are impediments to speed. But do you really believe that big firms have been able to abolish silos, speed up decision making and gain new strategic clarity in a few months’ time?


Big barriers exist for real reasons; if they were easy problems, they would have been fixed long ago. Few would doubt that executives say these long-standing issues are being addressed. But few of us might believe real progress is being made fast.


source: McKinsey


Rigid policies and formal hierarchy also are cited as impediments to speed. Have you heard of massive reductions of top-level and mid-level management over the past few months?


Or consider the impressionistic claims some might make. “Higher meeting attendance and timeliness” resulted in faster decisions, one survey respondent says. Do you really believe that? More people in meetings produced faster decisions? Much of the literature specifically argues that more people in meetings reduces decision-making ability.


It might be fair to hypothesize that meetings, as such, have had no discernible impact. Does anybody really believe holding more meetings improves output? In fact, there is evidence to the contrary: more meetings mean less time for getting the actual work done.


A team of researchers said this: “We surveyed 182 senior managers in a range of industries. 65 percent said meetings keep them from completing their own work. 71 percent said meetings are unproductive and inefficient. 64 percent said meetings come at the expense of deep thinking. 62 percent said meetings miss opportunities to bring the team closer together.”


Another leader notes that “communication between employees and executives has become more frequent and transparent, and as such, messages are traveling much more efficiently” through the organization. 


There already are signs of what might be called collaborative overload. “In most cases, 20 percent to 35 percent of value-added collaborations come from only three percent to five percent of employees.”


Colloquially, that proves the truth of the observation that “if you want something done, give it to a busy person.” The practical observation is that the highest performers are besieged with the greatest amount of demand for time spent in meetings and on work teams. At some point, the danger is that these high performers simply get asked to do too much, reducing their overall contributions and effectiveness. 


source: Harvard Business Review


Not to belabor the point, but value--assuming the insight is correct--can be gleaned by having many fewer people in meetings. 


Nor, for that matter, is it entirely clear how greater numbers of team members working remotely actually address any of those aforementioned issues. Some changes could materially impact performance in a positive way. But the issue is how remote work, for example, materially abolishes silos, speeds up decision making, produces strategic clarity, abolishes rigid policies or reduces hierarchy.


Some will argue that “employees like it.” The more accurate statement could be that “some employees prefer work from home, and some do not prefer it.” But productivity is not a matter of what people believe. It is a matter of fact, to the extent we can measure it. 


Productivity is often hard to measure--perhaps almost impossible for information workers--and employees claiming they are productive does not make it so. Also, what we can measure might not actually correlate well with actual output. Quantity is not quality, in other words. Innovative ideas and creativity might not be measurable at all, except by reputation.


Productivity measurements in non-industrial settings are difficult, as it often is difficult to come up with meaningful quantitative measurements that provide insight. We might all agree that not all tasks create the same value for any organization. Productivity can be described as the relationship between input and output, but “output” is tough to measure. 


And what can be measured might not be relevant. 


Even some who argue work-from-home productivity is just as high as “in the office” productivity concede that it is about one percent lower than in the workplace.


Some argue productivity now is higher than, or equal to, productivity when most people were in offices. Some of us would argue we do not yet have enough data to evaluate such claims, or to assess whether any productivity gains are sustainable. It is one thing when all competitors in a market are forced to have their workforces work from home.


It will be quite something else when WFH is a business choice, not an enforced requirement. As might be colloquially said, widespread WFH will last about as long as it takes for a key competitor not working that way to begin taking market share.


Not to deny that post-pandemic, some firms will find meaningful ways to restructure business processes to gain agility, productivity and speed, but the actual gains will be slower to realize than many expect, as organizational resistance to such changes will be significant. Resistance to change is a major fact of organizational life. 


Monday, December 3, 2012

Text messaging turns 20

The first SMS was sent as a Christmas greeting in December 1992, the Guardian notes. Adoption took a while, and texting was not terribly widely used in all markets at first. In fact, text messaging began to get serious traction around the year 2000. So it was eight years before lots of people started to use the new tool.

That's worth keeping in mind: even the most-useful consumer innovations can take some time to become widespread.

As a rule of thumb, an innovation that becomes widely used starts to grow much faster once it reaches about 10 percent penetration. But how long it takes to reach 10 percent can vary widely. 
 

Looking only at AT&T, you can see that text messaging volumes did not actually begin to build until after 2007, for example, despite having been available for more than a decade prior. 




And even in the United Kingdom, where consumers adopted the text messaging habit earlier, you can see that dramatic growth happened sometime around 2000. 

Roughly the same trend can be noted for global usage. Growth accelerated only around 2000. 




