Saturday, August 10, 2024

Is the Print Disruption Relevant for Video Entertainment?

It might be fair to say that video entertainment distributors and content providers face business model challenges that rival those faced by publishers decades ago, when digital media disrupted physical media. 


For example, total revenue for newspaper publishers dropped significantly from $46.2 billion in 2002 to $22.1 billion in 2020, a 52 percent decline, according to the U.S. Census Bureau. Estimated revenue for periodical publishing, which includes magazines, fell by 40.5 percent over the same period.


source: U.S. Census Bureau


Print advertising revenue fell from $73.2 billion in 2000 to $6 billion in 2023.


Similar damage was seen in the video-on-demand business, where video tape and disc rental revenue decreased by 88.5 percent.


The point is that advertising revenue began a long shift to digital and online media after 1996, with digital now perhaps representing 75 percent of all U.S. advertising.

source: Omdia 


How suppliers of linear video products will fare is the issue. Warner Bros. Discovery just booked a $9 billion impairment charge, while Paramount took a charge of about $6 billion, for example, reflecting a devaluation of linear video assets.


And while it is easy to note that video streaming is the successor product to linear TV, the business models remain challenged for both distributors and content suppliers.


Distributor revenue is an issue, as linear TV could count on both advertising and subscription revenues, while video streaming services mostly rely on subscriptions, though ad support is growing.


The other revenue issues are lower average revenue per user or account compared to linear TV, and higher subscriber churn.


Also, content costs have been higher than for linear services, as unique original content matters more than it does for most linear channels. Customer acquisition costs and marketing expenses also are higher in a direct-to-consumer environment.


Video content providers also face significant business model challenges as the industry shifts away from traditional linear distribution. Carriage fees paid to content owners by distributors disappear, while the core revenue model shifts from reliance on distributors to a “direct-to-consumer” model.


The linear model provided large potential audiences through a wholesale sales model, where the customer was the video distributor. The DTC model has to be built from scratch, and requires actual retail sales.


Also, up to this point revenue has been based on subscriber fees, and most streaming services have struggled to gain scale.  


Linear video also featured a clear windowing strategy and revenue opportunities (content was made available theatrically, then to broadcast, followed by cable and then syndication). Streaming is much more complex, with less certain revenue upside.


In principle, linear programming formats were geared to large audiences while much streaming content is aimed at niches and segments. That arguably represents higher risk and higher unit costs. 


Using the prior example of what happened with print content as online content grew, we might well expect profit margins for linear media to decline as content consumption shifts to streaming formats and linear scale is lost. 


We should also see a shift of market share from legacy providers to upstarts, with perhaps the greatest pressure on niche or specialty formats such as industry-specific sources. At some point, as legacy industries consolidate and shrink, there is less need for specialized media or conferences, for example.


The demise of virtually all the former telecom, personal computer and enterprise computing events and business media provides an example.


Content producer shares also could shift. We already have seen new “studios” arise, including Netflix, Amazon Prime, Apple TV and others, for example.


To the extent they were able to survive, print media had to innovate revenue models, moving from print advertising and subscriptions to digital ads and paywalls, though the scale of the new businesses was smaller than that of the old.


Similarly, video streaming services are exploring tiered subscriptions, ad-supported models, and content bundling, though we might speculate that the new business could well be smaller than the former industry. Some degree of fragmentation should be expected, where the legacy providers have less scale than before. 


All that could lead to a “professional” video entertainment industry that is smaller than it has been in the past, as hard as that might be to imagine. The model there is the growth of social media and user-generated content revenues compared to “professional” content provider revenues. 


For example, it is possible that digital platforms--Netflix, Prime Video and others--replace the traditional “TV broadcasters” or “cable TV networks” as the dominant “channels.” 


It also is conceivable that the content producers (studios) emerge as dominant in their “direct to consumer” roles, disintermediating the present distributors (TV broadcast networks, cable TV networks). Or, legacy networks and distributors might find some way to continue leading the market, as unlikely as some believe that is to happen. 


In the medium term, we might see a pattern similar to the former “print” industry, where a few legacy providers continue to have significant share, but where upstart new providers also have arisen. The legacy video networks might continue to represent venues for highly-viewed events, although much of the rest of viewing is fragmented among many suppliers.  


If the earlier transformation of “print” content is relevant for video entertainment, legacy businesses might never again be as large or important as they once were.


Friday, August 9, 2024

Synthetic Data Might be Quite Useful for Domain-Specific or Privacy-Critical Use Cases

There might be upsides and downsides as generative artificial intelligence systems--after crawling the whole internet--likely start to learn from each other. To be sure, some new data stores conceivably can be crawled, but that process will increasingly be expensive and involve much smaller, more-specialized sets of data, such as some proprietary enterprise content. 


But all that will be incremental. What is likely to happen is that models start to learn from each other, using “synthetic data” that is artificially generated to mimic real-world data in its statistical properties and structure, but without containing actual real-world data points.
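To make that idea concrete, here is a minimal, hypothetical sketch: fit the statistical properties (mean and covariance) of a real tabular dataset, then sample fresh rows from the fitted distribution. The data here are invented for illustration; real synthetic-data pipelines are far more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a real tabular dataset (hypothetical): 1,000 rows, 3 numeric features.
real = rng.normal(loc=[50.0, 3.2, 0.7], scale=[12.0, 0.9, 0.1], size=(1000, 3))

# Fit the empirical mean and covariance, then sample synthetic rows that mimic
# the statistical properties of the real data without copying any actual row.
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)
synthetic = rng.multivariate_normal(mean, cov, size=5000)

print("real means:     ", np.round(mean, 2))
print("synthetic means:", np.round(synthetic.mean(axis=0), 2))
```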


That could have both good and bad implications. Perhaps synthetic data can help compensate for scenarios where training data is under-represented or unavailable. That can help improve model performance and robustness. 


Model Type | Benefits from Synthetic Data
Domain-Specific Models (e.g., medical, legal, financial) | Access to large, private, and high-quality datasets is crucial for performance. Synthetic data can bridge this gap.
Models for Low-Resource Languages | Synthetic data can augment limited real-world data, improving model performance for languages with fewer available resources.
Models Requiring Diverse and Sensitive Data | Generating synthetic data can protect privacy while providing exposure to diverse scenarios, reducing biases.
Models for Data Augmentation | Synthetic data can expand training datasets, improving model robustness and generalization.


Since synthetic data doesn't contain real individuals' information, it can be used to train language models on sensitive topics without risking privacy violations.


Carefully generated synthetic data can be used to balance datasets and reduce biases present in real-world data, potentially leading to fairer language models. 
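A minimal sketch of that balancing idea, using jittered copies of minority-class rows as crude synthetic samples (a rough stand-in for SMOTE-style interpolation; the dataset and jitter scale are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical imbalanced dataset: 950 negative examples, 50 positive.
X = rng.normal(size=(1000, 4))
y = np.array([0] * 950 + [1] * 50)

# Create synthetic minority-class rows by adding small noise to real
# minority rows until the classes are balanced.
minority = X[y == 1]
needed = int((y == 0).sum() - (y == 1).sum())
picks = minority[rng.integers(0, len(minority), size=needed)]
synthetic = picks + rng.normal(scale=0.05, size=picks.shape)

X_balanced = np.vstack([X, synthetic])
y_balanced = np.concatenate([y, np.ones(needed, dtype=int)])
print("class counts after augmentation:", np.bincount(y_balanced))
```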


In domains where real data is scarce or expensive to obtain, synthetic data might provide a viable alternative for training language models. Cost effectiveness is a possible advantage as well. 


Also, models could be pre-trained on large synthetic datasets before fine-tuning on smaller real-world datasets, potentially improving performance in data-limited domains. Likewise, synthetic data could be generated to support training for languages with limited real-world data available.
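A minimal sketch of that two-stage pattern, using scikit-learn's SGDClassifier as a stand-in for a real language model (the large synthetic corpus and small "real" dataset here are both hypothetical):

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)

# Plentiful synthetic data and a scarce "real" dataset sharing the same schema.
X_syn = rng.normal(size=(20000, 8))
y_syn = (X_syn[:, 0] + 0.5 * X_syn[:, 1] > 0).astype(int)
X_real = rng.normal(size=(500, 8))
y_real = (X_real[:, 0] + 0.5 * X_real[:, 1] > 0).astype(int)

clf = SGDClassifier(loss="log_loss", random_state=0)

# "Pre-train" on the large synthetic corpus...
clf.partial_fit(X_syn, y_syn, classes=np.array([0, 1]))

# ...then "fine-tune" with a few extra passes over the scarce real data.
for _ in range(5):
    clf.partial_fit(X_real, y_real)

print("accuracy on real data:", round(clf.score(X_real, y_real), 3))
```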


On the other hand, there are potential downsides. When AI systems learn from each other, there's a risk of amplifying existing biases present in the original training data. As models build upon each other's outputs, subtle biases can become more pronounced over time.


With AI systems learning from each other, there's a danger of converging on similar outputs and losing diversity of perspectives.


Of course, it might not always be the case that synthetic data accurately represents real-world scenarios. The same danger exists in terms of models learning incorrect information from other models.
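A toy simulation of how such errors can compound, assuming each model "generation" is trained only on samples from the previous generation (purely illustrative; real model-collapse dynamics are more complex):

```python
import numpy as np

rng = np.random.default_rng(7)

# Start from "real" data, then repeatedly fit a simple model (mean and std)
# and train the next generation only on samples drawn from the previous fit.
data = rng.normal(loc=0.0, scale=1.0, size=200)
for gen in range(1, 8):
    mu, sigma = data.mean(), data.std()
    print(f"generation {gen}: mean={mu:+.3f}, std={sigma:.3f}")
    data = rng.normal(mu, sigma, size=200)

# The fitted statistics drift away from the original distribution as
# estimation noise compounds across generations, with no new real data
# to pull them back.
```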


But there are many use cases where synthetic data is necessary or useful, including domain-specific models or privacy-sensitive models.


Antitrust Might be a Bigger Problem than GenAI

Perceptions and performance can change very quickly in the generative artificial intelligence business.


Consider the most recent financial report from Alphabet, perhaps the most vulnerable of the hyperscale app firms to new competition coming from generative AI rivals who see a chance to recast the search business. 


And the biggest danger to Alphabet arguably is regulatory, not AI impact. Antitrust action, a threat to all the hyperscalers, already has emerged for Alphabet. On August 5, 2024, a possibly landmark ruling by U.S. District Judge Amit Mehta found Google liable for violating antitrust laws by maintaining an illegal monopoly in the search engine market, as it pays partners such as Apple for default status on Apple devices. 


Though remedies are not yet clear, it is possible there is some future diminution of the market value of Google’s default app status, even if users can easily change default search providers at will. 


Separately, Meta’s open source approach to Llama also could have key market impact.


Venture capital firm Andreessen Horowitz suggests enterprises are quite open to open source generative AI. “We estimate the market share in 2023 was 80 percent to 90 percent closed source,” Andreessen Horowitz says. “However, 46 percent of survey respondents mentioned that they prefer or strongly prefer open source models going into 2024.”


Over time, “enterprises expect a significant shift of usage towards open source, with some expressly targeting a 50/50 split—up from the 80 percent closed/20 percent open split in 2023,” Andreessen Horowitz notes. 

Enterprises’ growing preference for open-source AI models

source: Andreessen Horowitz 


That noted, such impact is not yet seen. 


Total Alphabet revenue in the three months ended June 30, 2024 rose 13.6 percent year over year to $84.74 billion, outpacing the $84.19 billion expected. Earnings per share jumped 31 percent on an annual basis to $1.89, exceeding the $1.84 expected.


With the caveat that we are early in the transition, competition is not yet seen in the financial results.  Though investors were--and might still be--concerned about what generative AI might mean for Alphabet’s search revenue, the opposite appears to be the case, at least for now. 


“People who are looking for help with complex topics are engaging more and keep coming back for AI overviews,” said Sundar Pichai, Alphabet CEO, noting high engagement from younger users aged 18 to 24 when they use search with AI overviews.


The danger of some market share cannibalization is so far offset by continued growth of search revenues. So even if some new competitors hope to take share, the market itself continues to grow. And Google arguably continues to hold a lead in scale. 


Alphabet Threats | Alphabet Opportunities
Increased Competition: Alphabet faces competition from new generative AI search engines, such as OpenAI's, which pose a threat to its market dominance. | Enhanced Search Capabilities: Integrating generative AI into Google's search allows for better responses to complex queries and innovative search methods, attracting users, especially younger demographics.
Market Share Risk: The rise of AI-driven competitors could erode Alphabet's dominant market share in search. | Revenue Growth: Generative AI has contributed to better-than-expected search revenues, indicating potential for continued growth.
Technological Challenges: There is a risk of falling short in AI advancements, which could impact Alphabet's competitive edge. | User Engagement: AI features have led to high engagement levels, with users increasingly returning for AI-generated summaries.
Investor Concerns: Despite proactive measures, there are ongoing concerns about the impact of AI competition on Alphabet's stock performance. | Cost Management: Effective management of expenses and investments in generative AI can lead to improved operating margins and cost efficiencies.
Pressure on Innovation: The need to continuously innovate in AI to stay ahead of competitors adds pressure on Alphabet. | Strategic Investments: Alphabet has a chance to define the generative AI category.


It will take some time for the appeals process to play out (years), and then the key issue of remedies has to be settled. In the meantime, it remains unclear whether GenAI will actually harm Google search to any serious degree. 


A similar antitrust ruling in the European Union in 2018 forced Google to allow users to choose a different search engine as their default when setting up a new Android phone, while allowing the company to charge other search engines for the right to be included as an alternative. So far, this ruling has had a negligible effect on Google’s market share in Europe, most would say. 


The 1998 antitrust action against Microsoft, which prohibited bundling of browsers with the operating system, arguably has not impeded Microsoft’s growth, though some would say the ruling allowed Google to emerge as the leader in browsers. 


AT&T was broken up (seven new local access firms plus long-distance provider AT&T), but the firm has largely been reassembled. And in that case, as well as with IBM, one might plausibly argue that other platform changes (the internet; TCP/IP; the decomposition of services into layers; personal computing; cloud computing) were more responsible for fundamental changes than the antitrust actions. 


Firm | Antitrust Action Year | Business Model Change | Actual Damage to Firm
Microsoft | 1998 | Allowed competing browser software on Windows | Loss of browser market dominance, but questionable serious financial damage
AT&T | 1982 | Divided into smaller companies (Baby Bells) | Increased competition in telecom industry, but AT&T reassembled
IBM | 1969-1982 | Unbundled software from hardware sales | Slowed growth; emergence of competitors (PCs, smartphones)
Google | 2020-present | Potential changes in search and ad practices | Not clear, yet


Perhaps the point is that antitrust action is seldom welcomed by the firms to which it is applied. But the long-term impact is hard to discern. 


And though the impact of GenAI might also be significant for incumbents, it remains to be seen whether that impact is positive, neutral or negative. 


Thursday, August 8, 2024

How Big a Market for AI Data Center Interconnection?

Lumen Technologies has announced it will receive 10 percent of Corning’s total optical fiber production for a period of two years, to build additional capacity for “data center to data center” connections supporting artificial intelligence operations. 


The additional fiber is said to double Lumen’s route miles, while also supporting an order-of-magnitude expansion of “inside the data center” optical connections. Some might say the new move supports Lumen’s earlier move to emphasize its private connectivity fabric.


And while estimates vary, all observers would note that Lumen is a major provider of public optical networking services in the U.S. market. By some estimates, Lumen is the largest supplier. 



Optimists might say the new effort to create capacity connecting AI data centers is “new business.” Skeptics might say it is simply the evolution of the existing business. But Christopher Stansbury, CFO, says the new AI-related deals are “largely all incremental.” 


“This isn't cannibalization of legacy at all,” he notes. 




The AI "data center to data center" interconnection services market is expected to grow from $4.2 billion in 2023 to $9.1 billion by 2027, a CAGR of 16.8 percent, according to analysts at Gartner.


Gartner also forecasts the overall "data center to data center" capacity market growing from $48 billion in 2023 to $72 billion by 2027, a CAGR of 10.6 percent.
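For reference, the compound annual growth rate implied by such figures follows from a simple formula; a quick sketch using the Gartner overall-market estimate above:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by start and end values over a horizon."""
    return (end / start) ** (1 / years) - 1

# $48 billion (2023) to $72 billion (2027):
print(f"{cagr(48, 72, 4):.1%}")  # ~10.7%, in line with the cited 10.6 percent
```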


According to IDC, AI-driven "data center to data center" interconnection services will account for 22 percent of the total "data center to data center" capacity market by 2025, up from 15 percent in 2023.


The global "data center to data center" capacity market is projected to reach $64 billion by 2025, growing at a CAGR of 12.7 percent from 2023, IDC also believes.


Forrester researchers believe the market for AI-enabled "data center to data center" interconnection services will grow at a CAGR of 19 percent from 2023 to 2027, reaching $8.5 billion by 2027.


Forrester also estimates the overall "data center to data center" capacity market will grow at a CAGR of 11 percent from 2023 to 2027, reaching $68 billion by 2027.


Markets and Markets predicts the AI "data center to data center" interconnection services market will grow from $4.6 billion in 2023 to $10.2 billion by 2027, at a CAGR of 17.3 percent. The global "data center to data center" capacity market is forecast to grow from $51 billion in 2023 to $78 billion by 2027, at a CAGR of 11.2 percent, the firm says. 


Either way, “data center to data center” connections are an important niche or segment of the capacity business, including sales to third party customers as well as captive capacity owned and operated by hyperscalers including Alphabet, Amazon and Meta, which consume much capacity to support their own operations. 


Though it is virtually impossible to know with great precision, most estimates of active traffic on global long-haul networks point to the importance of “data center to data center” connections as a percentage of total active connectivity. For example, broad consensus seems to exist that between 30 percent and perhaps as much as half of active capacity now is used to support connections between data centers. 


By definition, that data center traffic is all enterprise or wholesale, while traffic between internet PoPs might be either retail or wholesale; consumer or enterprise. 


Category | Share of Global Capacity
Data Center Interconnections | 30-50%
Internet Points of Presence (PoPs) | 30-40%
Enterprise Networks | 10-15%
Other | 5-10%


The contribution of AI data center interconnection to Lumen financial results remains to be seen. But it is an important part of the connectivity business, and among its largest segments.


Monday, August 5, 2024

AI Capex Apparently Not an Issue So Long as a Firm is Making Lots of Money

The last quarter’s financial reporting suggests financial analysts and investors can tolerate uncertain capital investments in artificial intelligence without qualms, so long as the top-line revenue and bottom-line profits are there. 


Or at least that would seem to be the case, as Meta did not immediately suffer when it announced it was raising its AI capex, though Google was hammered. 


Amazon and Microsoft had mixed reports: Amazon reported smaller-than-expected revenue growth and profits, while Microsoft actually beat expectations for revenue and profit, but showed some weakness in cloud computing. 


Though all the firms’ stock prices were hit by a big downdraft in early August, Meta has held up best. 


Perhaps Meta’s reported ad revenue growth, along with its planned AI investments, shows one way to calm observers: keep making money while investing. That might be easier for some than others, as Amazon might face consumer spending headwinds. For Google, the issue seems more related to concerns about growing search competition, while Microsoft could be dinged by a recession that slows enterprise information technology spending. 


In the near term, in other words, so-called “AI stocks” might be evaluated based primarily on how their core legacy businesses are situated, irrespective of any future benefit from AI operations and products, since such benefits will not be obvious for some time. 


And the revenue results might not match the market reaction. As was the case at Alphabet and Meta, Microsoft revenue was up 15 percent year over year in the second quarter of 2024. Amazon revenue also was up, but less than financial analysts had expected. 

Except at Amazon, revenue growth was not the immediate issue. Instead, capex growth is the concern.

AI capex is up at Microsoft, but “roughly half of FY2024's total capital expense as well as half of fourth-quarter expense, it's really on land and build and finance leases, and those things really will be monetized over 15 years and beyond,” said Amy Hood, Microsoft CFO. 

Over at Alphabet, second-quarter 2024 revenues were up 14 percent year over year (15 percent in constant currency). But AI capex is expected to hit $50 billion in 2024. 


Market watchers seem to see danger for Alphabet’s search revenue stream as rival AI suppliers seek to cut into Google’s search dominance, beyond the issue of AI capex magnitude. 


 “The risk of under-investing is dramatically greater than the risk of over-investing for us here, even in scenarios where it turns out that we are over investing,” said Sundar Pichai, Alphabet CEO. 


It also is worth noting that, because of regulatory scrutiny, it no longer is possible for Alphabet to “acquire” positions in markets or capabilities. Instead, it has to grow them organically. 


The implications of Meta’s positioning on AI capex might be about as good as it gets: robust core revenue drivers able to support AI capex. Alphabet’s ad growth was not as good as Meta’s in the second quarter, and beyond that there are the concerns of search market share dangers. 


“While we expect the returns from Generative AI to come in over a longer period of time, we’re mapping these investments against the significant monetization opportunities that we expect to be unlocked,” Zuckerberg noted.


In the particular case of Meta, at least some analysts and observers will be heartened by the apparent recognition on Meta’s part that AI is the more-immediate opportunity, compared to augmented reality, for example. 


“A few years ago, I would have predicted that holographic AR would be possible before Smart AI, but now it looks like those technologies will actually be ready in the opposite order,” said Zuckerberg. 


Commenting on “the AI platform shift,” Satya Nadella, Microsoft Chairman and CEO, noted that AI is similar to the prior transition to cloud computing, involving capital-intensive investments.

In other words, investment has to be made. Public companies that continue to have strong revenue and profit growth will essentially get a pass. But other firms will face greater scrutiny.


Will AI Fuel a Huge "Services into Products" Shift?

As content streaming has disrupted music, is disrupting video and television, so might AI potentially disrupt industry leaders ranging from ...