
Wednesday, October 16, 2024

How Should Governments Subsidize Unlimited Internet Access?

A new Federal Communications Commission inquiry into data caps raises some obvious issues. The action is driven by complaints from internet users about the very existence of such caps, which might be likened to rationing of a resource, says FCC Chairwoman Jessica Rosenworcel. At least so far, the concerns seem directly related to data limits and the cost of service plans.


In the press release announcing the inquiry, the FCC included complaints such as “We can't afford $190 a month for unlimited internet.” Other cited complaints revolve around “excessive costs” or existing data caps being insufficiently capacious.


On the other hand, charging different prices for different consumption volumes is a fairly standard practice in most businesses and industries, though some subscription services, such as linear video, satellite radio or other audio streaming services, routinely operate on an “unlimited consumption” model.


So do many mobile phone voice and texting service plans, at least for domestic usage.


As a rule, consumers seem to expect volume-related pricing for physical goods and most intangible products, whether that is water, electricity, groceries, fuel or clothing. 


And, for providers, total costs of creating and providing a service or product matter, not only the cost of one particular part of the value chain. Some observers focus only on the marginal cost of providing the next unit of consumption, not full costs (capital investment and all operating costs). 


The total cost of providing an internet access service arguably differs for large, dominant providers and smaller local providers. 


Though network infrastructure, bandwidth and transit costs are highly important for all ISPs, smaller, regional ISPs might find that bandwidth and infrastructure costs dominate, whereas larger ISPs might find that marketing, research and development, and regulatory compliance take on greater importance.


Customer premises equipment, labor, marketing and customer acquisition costs generally are of medium importance for all ISPs. 


But large ISPs might find regulatory compliance, research and development and energy costs more significant than small ISPs do.


Study Title | Date | Publication Venue | Key Conclusions
"The Economics of Internet Traffic" | 2002 | Journal of Economic Perspectives | Found that marginal cost is generally low, especially for peak-time traffic.
"The Cost of Internet Traffic: A Study of Residential Broadband Access" | 2005 | Telecommunications Policy | Examined the cost structure of residential broadband access and concluded that marginal cost is relatively low.
"The Economics of Internet Traffic: A Critical Review" | 2010 | Telecommunications Policy | Provided a comprehensive overview of research on the economics of Internet traffic, highlighting the challenges of measuring marginal cost accurately.
"The Cost of Internet Traffic: A Survey of Recent Studies" | 2014 | Telecommunications Policy | Summarized key findings from recent studies on the marginal cost of Internet traffic, emphasizing the importance of network congestion and traffic patterns.


The point is that the marginal cost of providing the next unit of capacity or consumption might not be the only measure, or the best measure, of cost and its relationship to consumer pricing. Providers can affect some of their total costs. But many fundamental costs, including network infrastructure, are relatively inelastic. 


Other costs have some elasticity, but can be hard to contain in highly competitive markets. So the actual marginal network cost of producing the next unit of capacity might not be the best metric for assessing the “fairness” of access pricing. 
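To make that distinction concrete, here is a minimal back-of-the-envelope sketch in Python. All of the figures (monthly fixed cost per subscriber, marginal transit cost per gigabyte, usage levels) are hypothetical assumptions chosen only to illustrate why average cost per gigabyte and marginal cost per gigabyte can diverge so sharply.

```python
# Hypothetical ISP cost sketch: average vs. marginal cost per GB.
# All numbers are illustrative assumptions, not measured industry data.

FIXED_COST_PER_SUB = 45.00   # assumed monthly fixed cost per subscriber (network, labor, CPE, overhead), USD
MARGINAL_COST_PER_GB = 0.02  # assumed incremental transit/capacity cost per GB delivered, USD

def monthly_cost(usage_gb: float) -> float:
    """Total monthly cost to serve one subscriber at a given usage level."""
    return FIXED_COST_PER_SUB + MARGINAL_COST_PER_GB * usage_gb

for usage_gb in (100, 500, 1000, 2000):
    total = monthly_cost(usage_gb)
    avg_per_gb = total / usage_gb
    print(f"{usage_gb:>5} GB: total ${total:6.2f}, "
          f"average ${avg_per_gb:.3f}/GB, marginal ${MARGINAL_COST_PER_GB:.3f}/GB")
```

Under these assumed numbers, the marginal cost of the next gigabyte stays at two cents, while the average cost per gigabyte ranges from roughly 47 cents at 100 GB of monthly usage to about four cents at 2 TB. That gap is why pricing debates framed purely around marginal cost can miss the recovery of fixed and sunk costs.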


The larger issue, perhaps, is the need for sustainable business models that allow internet service providers to continually expand capacity, supporting consumer usage at prices those consumers consider fair and reasonable. All of that is dynamic.


To the extent that “cost of use” is a financial problem, governments routinely use subsidies of various types to support consumption of essential or important goods by some citizens who would not otherwise be able to afford such goods. 


As always, the issue of “who pays” has to be answered in the concrete. To the extent that ISP sustainability literally requires profits, providers have to keep working on efficiencies so they can keep costs “reasonable.” And observers debate the degree to which customer usage volume actually matters.


Logically, marginal costs exist when customers use more of a resource. But how much marginal cost exists is an issue. Fixed or sunk costs might actually predominate. But again, subsidy programs can be created that address the needs of specific populations deemed to require support.


Friday, October 4, 2024

Why Marginal Cost of Content Creation is Generative AI's Superpower

Virtually every observer might agree that artificial intelligence will automate laborious tasks and therefore increase process efficiency. AI should also accelerate decision making, as it enables rapid information processing. 


podcast of this content


AI should enable more personalization of user interactions and experiences than is possible today and, as a byproduct, could change the nature of work, entertainment and learning. 


Generative AI, though, might bring cost impact in different ways than did other computing innovations. Virtually all computing eras since the advent of the personal computer have led to lower marginal costs of doing things. 


PCs meant computing power itself was widely available to people. The internet attacked the cost of sharing information and communicating while cloud computing arguably reduced software distribution costs while boosting the ability to apply accumulated data and insights more widely in real time. 


The mobile era extended computing capabilities “everywhere” and untethered from desks, tables or laps. 


Era | Computing Paradigm | Marginal Cost Implications
PC | Personal Computing | High upfront costs for hardware and software; relatively high marginal costs for upgrades and maintenance; limited scalability
Internet | Networked Computing | Reduced costs for information sharing and communication; increased accessibility, but still significant infrastructure costs; marginal costs tied to bandwidth and server capacity
Cloud Computing | On-Demand Computing | Significantly lower upfront costs; pay-as-you-go model reduces marginal costs; improved scalability and flexibility; potential for cost optimization through resource management
Mobile | Ubiquitous Computing | Lower device costs compared to PCs; app-based ecosystem with low distribution costs; increased connectivity, but data costs can be significant
Future AI | Intelligent Computing | Potential for near-zero marginal costs in some applications; high initial investment in AI development and infrastructure; continuous learning and improvement may reduce long-term costs


So it is reasonable to ask what the AI impact will be, especially generative AI, which seems to be driving mass market and most business AI use cases. 


Angela Strange, an Andreessen Horowitz general partner, and James da Costa, an Andreessen Horowitz partner, specialize in enterprise and business-to-business software, including financial technology. 


They believe the AI era leads to lower marginal costs for client and customer interactions, as AI agents reduce the labor cost of many customer support operations, including those involving information retrieval (files, ledger entries, past transactions, billing and account status). 


source: Andreessen Horowitz 


As applied in many areas outside of financial technology, the value of generative AI rests squarely on its impact on content creation. 


Whether we look at text, image, video or audio, GenAI seems destined to have the highest impact on any process or industry built on content creation and its distribution or consumption. GenAI will be useful in any number of customer support contexts, but might be impactful in financial terms for the production of software and code; entertainment content; education and training; business communications; many types of research; marketing and sales. 
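As a rough illustration of why content creation is where the marginal-cost impact shows up, the sketch below compares the cost of producing one additional piece of content with human labor alone versus with a generative AI model plus human review. The hourly rate, hours per piece, token counts and per-token price are all hypothetical assumptions for illustration, not quoted market figures.

```python
# Hypothetical marginal cost of producing one more piece of content.
# All inputs are illustrative assumptions.

# Human-authored draft
HOURLY_RATE = 60.0        # assumed fully loaded cost per hour of a writer, USD
HOURS_PER_PIECE = 4.0     # assumed hours to research and draft one piece

# Generative AI draft
TOKENS_PER_PIECE = 3_000      # assumed prompt + completion tokens per piece
PRICE_PER_1K_TOKENS = 0.01    # assumed blended API price per 1,000 tokens, USD
REVIEW_MINUTES = 30           # assumed human editing/review time per piece

human_cost = HOURLY_RATE * HOURS_PER_PIECE
ai_cost = (TOKENS_PER_PIECE / 1_000) * PRICE_PER_1K_TOKENS \
          + HOURLY_RATE * (REVIEW_MINUTES / 60)

print(f"Human-only marginal cost per piece:  ${human_cost:.2f}")
print(f"AI-assisted marginal cost per piece: ${ai_cost:.2f}")
print(f"Ratio: {human_cost / ai_cost:.1f}x")
```

Even with a human still reviewing every piece, the assumed numbers put the AI-assisted marginal cost at a small fraction of the human-only cost; the large fixed costs of model training and infrastructure, as the table above notes, remain.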



Monday, September 30, 2024

Amara's Law and Generative AI Outcomes: Less than You Expect Now; More than You Anticipate Later

Generative artificial intelligence is as likely to show the impact of Amara's Law as any other new technology, which is to say that initial outcomes will be less than we expect, while long-term impact will be greater than we anticipate.


Amara’s Law suggests that we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.


Source


Amara’s Law seemingly is the thinking behind the Gartner Hype Cycle, for example, which suggests that initial enthusiasm wanes when outcomes do not appear, leading to disillusionment and then a gradual appearance of relevant outcomes later. 


Lots of other "rules" about technology adoption also testify to the asymmetrical and non-linear outcomes of new technology.


“Most people overestimate what they can achieve in a year and underestimate what they can achieve in ten years” is a quote whose provenance is unknown, though some attribute it to Stanford computer scientist Roy Amara and some people call it “Gates’ Law.”


The principle is useful for technology market forecasters, as it seems to illustrate other theorems including the S curve of product adoption. The expectation for virtually all technology forecasts is that actual adoption tends to resemble an S curve, with slow adoption at first, then eventually rapid adoption by users and finally market saturation.   


That sigmoid curve describes product life cycles, suggests how business strategy changes depending on where on any single S curve a product happens to be, and has implications for innovation and start-up strategy as well. 


source: Semantic Scholar 


Some say S curves explain overall market development, customer adoption, product usage by individual customers, sales productivity, developer productivity and sometimes investor interest. It often is used to describe adoption rates of new services and technologies, including the notion of non-linear change rates and inflection points in the adoption of consumer products and technologies.


In mathematics, the S curve is a sigmoid function. It is the basis for the Gompertz function which can be used to predict new technology adoption and is related to the Bass Model.
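For readers who want to see what those curves look like numerically, here is a small Python sketch of the three functions mentioned above: the logistic (sigmoid), the Gompertz function and the Bass diffusion model. The parameter values (growth rates, inflection timing, Bass innovation and imitation coefficients) are illustrative assumptions, not values fitted to any real adoption data.

```python
# Illustrative adoption curves: logistic, Gompertz, and Bass diffusion.
# Parameter values are assumptions chosen only to show the curve shapes.
import math

def logistic(t, k=0.6, t0=10.0):
    """Logistic (sigmoid) adoption fraction at time t."""
    return 1.0 / (1.0 + math.exp(-k * (t - t0)))

def gompertz(t, b=5.0, c=0.3):
    """Gompertz adoption fraction at time t (earlier inflection, long tail)."""
    return math.exp(-b * math.exp(-c * t))

def bass(t, p=0.03, q=0.4):
    """Bass model cumulative adoption fraction (innovation p, imitation q)."""
    e = math.exp(-(p + q) * t)
    return (1.0 - e) / (1.0 + (q / p) * e)

for t in range(0, 21, 5):
    print(f"t={t:>2}: logistic={logistic(t):.2f}  "
          f"gompertz={gompertz(t):.2f}  bass={bass(t):.2f}")
```

All three produce the slow-start, rapid-middle, saturating-end shape described above; they differ mainly in how early the inflection point arrives and how long the tail toward saturation runs.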


Another key observation is that some products or technologies can take decades to reach mass adoption.


It also can take decades before a successful innovation actually reaches commercialization. The next big thing will have first been talked about roughly 30 years earlier, says technologist Greg Satell. The term “machine learning” was coined at IBM in 1959, for example, and machine learning is only now coming into widespread use. 


Many times, reaping the full benefits of a major new technology can take 20 to 30 years. Alexander Fleming discovered penicillin in 1928, but it did not arrive on the market until 1945, nearly 20 years later.


It can be argued that electricity did not have a measurable impact on the economy until the early 1920s, some 40 years after Edison’s first power plant.


It wasn’t until the late 1990s, or about 30 years after 1968, that computers had a measurable effect on the US economy, many would note.



source: Wikipedia


The S curve is related to the product life cycle, as well. 


Another key principle is that successive product S curves are the pattern. A firm or an industry has to begin work on the next generation of products while existing products are still near peak levels. 


source: Strategic Thinker


There are other useful predictions one can make when using S curves. Suppliers in new markets often want to know “when” an innovation will “cross the chasm” and be adopted by the mass market. The S curve helps there as well. 


Innovations reach an adoption inflection point at around 10 percent. For those of you familiar with the notion of “crossing the chasm,” the inflection point happens when “early adopters” drive the market. The chasm is crossed at perhaps 15 percent adoption, according to technology theorist Geoffrey Moore.

source 


For most consumer technology products, the chasm gets crossed at about 10 percent household adoption. Professor Geoffrey Moore does not use a household definition, but focuses on individuals. 

source: Medium


And that is why the saying “most people overestimate what they can achieve in a year and underestimate what they can achieve in ten years” is so relevant for technology products. Linear demand is not the pattern. 


One has to assume some form of exponential or non-linear growth. And we tend to underestimate the gestation time required for some innovations, such as machine learning or artificial intelligence. 


Other processes, such as computing power, bandwidth prices or end user bandwidth consumption, are more linear. But the impact of those linear functions also tends to be non-linear. 


Each deployed use case, capability or function creates a greater surface for additional innovations. Futurist Ray Kurzweil called this the law of accelerating returns. Rates of change are not linear because positive feedback loops exist.


source: Ray Kurzweil  


Each innovation leads to further innovations and the cumulative effect is exponential. 


Think about ecosystems and network effects. Each new applied innovation becomes a new participant in an ecosystem. And as the number of participants grows, so do the possible interconnections between the discrete nodes.  

source: Linked Stars Blog 


Think of that as analogous to the way people can use one particular innovation to create another adjacent innovation. When A exists, then B can be created. When A and B exist, then C and D and E and F are possible, as existing things become the basis for creating yet other new things. 
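A tiny numeric sketch of that compounding effect: if each pair of existing innovations (or ecosystem participants) can in principle be combined, the number of possible pairings grows quadratically with the number of nodes. The node counts below are arbitrary illustrative values.

```python
# Possible pairwise combinations among n ecosystem participants.
# Node counts are arbitrary illustrative values.
from math import comb

for n in (2, 5, 10, 50, 100):
    print(f"{n:>3} nodes -> {comb(n, 2):>5} possible pairwise connections")
```

Going from 10 to 100 nodes multiplies the participants by ten but multiplies the possible pairings by roughly a hundred, which is one way to see why each deployed innovation enlarges the surface for the next one.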


So we often find that progress is slower than we expect, at first. But later, change seems much faster. And that is because non-linear change is the norm for technology products. So is Amara’s Law.


Will Generative AI Follow Development Path of the Internet?

In many ways, the development of the internet provides a model for understanding how artificial intelligence will develop and create value. ...