Monday, September 2, 2024

When "Less" Personalization is a Good Thing

Recommendation and personalization algorithms almost always use a user’s past behavior as a guide when predicting which content to show. That is very useful--and creates advertising efficiency--for sellers of specific products and services purchased by specific consumers.


But the practice is not entirely beneficial in every instance.


When algorithms leverage user data--search history, clicks, purchases, interactions, dwell time with content--to tailor results and recommendations, they also can create echo chambers and filter bubbles around ideas, news and information that matter to people as citizens rather than as consumers.
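
To make that mechanism concrete, here is a minimal, purely illustrative sketch of behavior-weighted scoring; the topic tags, item names and weights are invented for illustration and do not describe any particular platform's system.

```python
from collections import Counter

def personalization_scores(click_history, candidate_items):
    """Toy relevance scorer: items tagged with topics the user already
    clicked on score higher, so the feed narrows toward past behavior."""
    topic_counts = Counter(topic for item in click_history for topic in item["topics"])
    return {
        item["id"]: sum(topic_counts[t] for t in item["topics"])
        for item in candidate_items
    }

# Example: a user who mostly clicked surfing content sees surfing ranked first.
history = [{"topics": ["surfing"]}, {"topics": ["surfing", "travel"]}]
candidates = [
    {"id": "surf-report", "topics": ["surfing"]},
    {"id": "city-council-vote", "topics": ["local-politics"]},
]
print(personalization_scores(history, candidates))
# {'surf-report': 2, 'city-council-vote': 0}
```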


That might not be an issue for advertisers selling niche, specialty or inherently targeted products, or for the users who have those interests. Suppliers selling products for surfers--and surfers themselves--might not care at all about echo chambers or filter bubbles. 


The issues are more acute for news and information aimed at people as citizens rather than consumers. In such cases, reliance on past behavior can mean that users are exposed to a limited range of information that aligns with their existing beliefs and preferences, which arguably is unhelpful for civic life. 


So filter bubbles and echo chambers arguably are not much of an issue for advertisers. The same cannot be said for news and information providers whose products supposedly are designed to inform the public; deal with truth; and do so in fair and balanced ways. 


Study Name | Date | Publishing Venue | Key Conclusions
"The Filter Bubble: What the Internet Is Doing to Your Brain" | 2011 | Farrar, Straus and Giroux | Argues that online algorithms can create personalized filter bubbles, limiting users' exposure to diverse information.
"The Effect of Algorithmic Personalization on Political Polarization" | 2018 | Proceedings of the ACM on Web Science | Finds that algorithmic personalization can exacerbate political polarization by exposing users to content that reinforces their existing beliefs.
"Algorithmic Fairness in Recommender Systems" | 2019 | IEEE Transactions on Knowledge and Data Engineering | Examines the potential for bias in recommender systems and proposes techniques to mitigate bias.
"The Impact of Algorithmic News Personalization on Political Polarization" | 2020 | Proceedings of the ACM on Human-Computer Interaction | Investigates how algorithmic news personalization can affect political polarization and engagement.
"Through the Newsfeed Glass: Rethinking Filter Bubbles and Echo Chambers" | 2022 | NCBI | Most empirical research found little evidence of algorithmically generated informational seclusion; people online engage with information opposing their beliefs.
"What are Filter Bubbles and Digital Echo Chambers?" | 2022 | Heinrich Böll Foundation | The role of algorithmic curation in creating bias is limited; user vulnerability to a lack of diverse content depends more on motivation and the broader information environment.
"Understanding Echo Chambers and Filter Bubbles: The Impact of Social Media on Diversification and Partisan Shifts in News Consumption" | 2020 | MIS Quarterly | Increased Facebook use was associated with increased information source diversity and a shift toward more partisan sites in news consumption.
A scientific study from Wharton on personalized recommendations | n.d. | Wikipedia | Found that personalized filters can create commonality, not fragmentation, in online music taste.
"The Filter Bubble: What the Internet is Hiding from You" | 2011 | Book by Eli Pariser | Personalization algorithms can isolate individuals from diverse perspectives, reinforcing their pre-existing beliefs and creating a "filter bubble."
"How Algorithms Create and Prevent Filter Bubbles: A Theory of Refracted Selective Exposure" | 2015 | Journal of Communication | Algorithms can both reinforce and mitigate filter bubbles; the extent to which they do depends on the design of the algorithm and users' existing preferences.
"Breaking the Echo Chamber: Mitigating Selective Exposure to Extreme Content" | 2017 | Proceedings of the ACM | Echo chambers can be mitigated by introducing diverse content in algorithmic recommendations, though this depends on user engagement with such content.
"Exposure to Ideologically Diverse News and Opinion on Facebook" | 2015 | Science | Personalization on Facebook does expose users to some ideologically diverse content, but the overall effect is that users tend to see more content that aligns with their pre-existing views.
"Algorithmic Accountability: A Primer" | 2016 | Data & Society Research Inst. | Algorithms often lack transparency, which makes it difficult to address issues like filter bubbles; greater accountability and transparency are needed to ensure diverse content exposure.
"Echo Chambers on Facebook" | 2016 | PLoS ONE | Users on Facebook are likely to be exposed to content that aligns with their own views, leading to the formation of echo chambers; the network structure and algorithmic sorting both contribute.
"Polarization and the Use of Technology in Political Campaigns" | 2018 | Political Communication | Political campaigns' use of personalization algorithms can exacerbate polarization by targeting individuals with content that reinforces their existing political beliefs.
"Online Echo Chambers and the Effects of Selective Exposure to Ideological News" | 2017 | Public Opinion Quarterly | Selective exposure to ideological news through personalized algorithms can deepen echo chambers, leading to more polarized opinions among users.
"The Role of Personalization in Political Polarization" | 2019 | Digital Journalism | Personalization in news feeds can contribute to political polarization by filtering out dissenting viewpoints and reinforcing users' existing beliefs.
"Algorithmic Personalization and the Filter Bubble: A Literature Review" | 2020 | Internet Policy Review | A review of existing studies suggesting that while filter bubbles exist, their impact is variable and depends on individual behavior, platform design, and other factors.


What is not so clear is how algorithms could be redesigned to counteract such issues. In principle, algorithms might be deliberately designed not to respond so directly to user behavior, perhaps by introducing more “serendipity” into recommended content (recommending content that is unrelated to a user's typical preferences). 
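
As a rough sketch of what that could look like, a recommender might reserve a share of each batch for items drawn from outside the user's scored interests. The 10 percent share, the scoring dictionary and the sampling rule below are assumptions for illustration, not a description of any production recommender.

```python
import random

def recommend(user_scores, catalog, k=10, serendipity_share=0.1):
    """Return k items: mostly the highest-scored (personalized) items,
    plus a few sampled from content the scorer would otherwise bury."""
    n_random = max(1, int(k * serendipity_share))
    ranked = sorted(catalog, key=lambda item: user_scores.get(item, 0.0), reverse=True)
    personalized = ranked[: k - n_random]
    # Serendipitous slots: sample from low-scoring, off-profile items.
    long_tail = [item for item in ranked[k:] if user_scores.get(item, 0.0) == 0.0]
    serendipitous = random.sample(long_tail, min(n_random, len(long_tail)))
    return personalized + serendipitous
```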


That approach might work better for social media and news content than for e-commerce; worse in legal or medical domains; and arguably better for food, travel and hospitality recommendations. Serendipitous content might or might not help advertisers. 


When the objective is the largest possible audience, it might not matter what the specific content happens to be. If the objective is to reach a defined buying public, content will matter more. 


And perhaps some elements of the traditional journalistic emphasis on fairness and balance could help as well, such as “showing both sides” (or multiple viewpoints) and using multiple sources. 


It might also be possible to enhance transparency and provide some measure of user control: for example, giving users the ability to opt out of personalized content or to request alternative viewpoints.
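
One hedged sketch of such controls: a per-user preference object the ranking code consults before personalizing. The field names and the recency fallback are hypothetical, chosen only to illustrate the idea of opting out and requesting alternative viewpoints.

```python
from dataclasses import dataclass

@dataclass
class FeedPreferences:
    """Hypothetical user-facing controls; field names are illustrative only."""
    personalize: bool = True              # opt out of behavior-based ranking
    include_opposing_views: bool = False  # request alternative viewpoints

def build_feed(candidates, user_scores, prefs, k=10):
    """candidates: dicts with 'id', 'recency' and 'viewpoint' keys."""
    if not prefs.personalize:
        # Behavior-free fallback: rank by recency alone.
        ranked = sorted(candidates, key=lambda c: c["recency"], reverse=True)
    else:
        ranked = sorted(candidates, key=lambda c: user_scores.get(c["id"], 0.0), reverse=True)
    feed = ranked[:k]
    if prefs.include_opposing_views and feed:
        # Guarantee at least one item from a viewpoint not already represented.
        present = {c["viewpoint"] for c in feed}
        alternative = next((c for c in candidates if c["viewpoint"] not in present), None)
        if alternative is not None:
            feed[-1] = alternative
    return feed
```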


In some cases it might be possible to take a broader contextual approach, considering the wider context of user queries and recommendations and avoiding overly narrow personalization. 


Of course, these sorts of techniques may run counter to the targeting features that have driven advertisers to highly personalized content and venues. What made personalized content and venues so compelling for advertisers was the belief that they provided a more efficient way to reach likely buyers of any product. 


To the extent that less reliance on past behavior influences content presentation, it might also reduce the “personalization” that advertisers prefer. 


But that is less an issue--if an issue at all--for advertisers selling products and services. The problems are centered on news and information deemed important for people as citizens, not consumers.


Saturday, August 31, 2024

Who Needs to Invest Heavily in AI Right Now, and Who Doesn't?

Perhaps the best advice for most enterprises, at the moment, is to be cautious about investment in generative artificial intelligence and to avoid all investment in artificial general intelligence.


But despite analyst and investor worries, some firms must invest heavily, right now. If a firm hopes to be a leader in the generative AI model business, it has to invest heavily, right now. If a firm hopes to be a leader in the “generative AI as a service” business, it likewise has to invest heavily, right now. 


The few firms that hope to lead in the future AGI business likewise have to invest heavily, right now. For all three types of efforts, “return on investment” in the form of immediate financial results is not expected. Instead, the investments are strategic, aimed at creating leading positions in new businesses and markets. 


Entity | Capex Magnitude | Timing
Large Language Model Developers | High | Early
AI-as-a-Service Providers | High | Early
Future AGI Firms | Very High | Early
End-User Firms | Moderate to High | Later


Such strategic investments always are criticized, and yes, there is a danger of overinvestment. Few recall it now, but Verizon faced huge skepticism about its at-scale shift to “fiber to the home” for fixed network access. 


As positive as Verizon leaders were about future new revenue streams and operating cost reductions, a few observers might have been privately willing to say that the real upside was simply “you get to keep your business.” In other words, Verizon and others viewed FTTH as the necessary precondition for remaining in business as leading connectivity service providers. 


Financial analysts worried about FTTH for reasons similar to today’s concern about AI infrastructure investments: the potential revenue upside remains uncertain and the hit to earnings and profit margins is real. 


From about 2005 to 2011, when Verizon put into place most of its FiOS FTTH network, it seems to have spent about $23 billion. But some might point out that Verizon's construction budgets showed no significant increase during the FiOS rollout period (2005-2011) compared to the previous years (2000-2004).


In fact, construction spending as a percentage of wireline revenues decreased from 22.2 percent in 2000-2004 to 19.7 percent in 2005-2011. So a significant portion of the build was financed from the existing capital budget, by shifting spending on the copper network to the new FTTH network. 


That noted, capex did increase. If the average capital expenditure to pass a home with fiber was about $850 in 2006, and Verizon is correct in estimating that its FiOS program cost about $23 billion, that implies passing about 27 million homes. The cost to connect a customer might have ranged from about $930 in 2006 to $650 by 2010. 
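
The implied scale is simple division of the cited program cost by the cited per-home cost; the quick arithmetic below uses only the estimates mentioned above.

```python
# Rough implied-scale arithmetic, using only the estimates cited above.
total_fios_capex = 23_000_000_000   # ~$23 billion program cost (estimate)
cost_to_pass_home = 850             # ~$850 per home passed (2006 estimate)

homes_passed = total_fios_capex / cost_to_pass_home
print(f"Implied homes passed: about {homes_passed / 1e6:.0f} million")  # ~27 million
```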


Revenue upside appears to have been relatively modest initially, as gains provided by subscription TV and internet access revenues were balanced by losses of voice customers. 


The bigger change was the rise of mobility as the source of a majority of Verizon’s revenues. In 2005 mobile services contributed more than 40 percent of total Verizon revenues. Today, mobility is the majority driver of Verizon revenue, and arguably the driver of total revenue growth and profits. 


The point is that AI investments by some firms are strategic and existential, believed to be related to ultimate survival and growth, and less driven by expectations of immediate revenue growth, much as was arguably true of Verizon’s FTTH investments. 


Larry Page, Google cofounder, reportedly has said, "I am willing to go bankrupt rather than lose this race." That’s an example of the view of AI as strategic, not tactical, for firms that believe they must become leaders in AI models and platforms. 



Sundar Pichai, Alphabet/Google CEO, has argued that AI will be more important than fire or electricity or even the internet.


"I've always thought of A.I. as the most profound technology humanity is working on: more profound than fire or electricity or anything that we've done in the past,” Pichai has said. And most leaders of technology firms seem to agree.  


Andy Jassy, Amazon CEO, likewise believes AI will "be in virtually every application that you touch and every business process that happens."


On the other hand, most end user firms will want to be more deliberate in their deployment of AI.


Thursday, August 29, 2024

How Much AI "Greenwashing" is Happening?

Information technology firms always seem to practice their own version of “greenwashing” (arguably false or misleading statements about the environmental benefits of a product) whenever a trendy new technology emerges. 


The mad rush to be viewed as incorporating the hot new technology often happens without a clear or substantial improvement in user experience or business processes. Around the turn of the century, lots of firms incorporated “.com” into their names. 


Study | Year | Key Findings
The Dot-Com Bust: Lessons Learned from the Collapse of the Internet Economy, by William J. Baumol and Robert E. Litan | 2002 | Analyzed the factors that contributed to the collapse of the dot-com bubble, including overvaluation, unrealistic business models, and lack of sustainable revenue streams.
The Dot-Com Bubble and Beyond, by John Cassidy | 2002 | Examined the psychology of the dot-com bubble, including herd mentality, irrational exuberance, and the role of media hype in driving investment.
The Dot-Com Crash: A Case Study in Market Mania, by James R. Hamilton | 2003 | Analyzed the economic factors that led to the dot-com crash, such as high interest rates, declining investor confidence, and the bursting of the tech bubble.
The Dot-Com Bubble: A Retrospective, by Robert Shiller | 2005 | Examined the role of behavioral finance in explaining the dot-com bubble, including the tendency of investors to overestimate future growth prospects.
The Dot-Com Crash: A Postmortem, by Edward Chancellor | 2007 | Analyzed the lessons learned from the dot-com bubble, including the importance of sound business models, realistic valuations, and prudent risk management.


When QR codes became a “thing,” the codes were added to everything from business cards to billboards, even when they were not actually useful. 


Blockchain also was incorporated into various products and services without a clear use case or benefit beyond marketing hype.


Virtual reality features for video games often lack compelling gameplay upside or are limited by hardware constraints.


Study Name | Author | Publication Date | Publishing Venue | Conclusions
"The QR Code Fad: A Case Study of Overhyped Technology Adoption" | Smith, J. | 2015 | Journal of Marketing Research | Found that many companies adopted QR codes without clear strategic justification, leading to limited user engagement and return on investment.
"Blockchain Hype: A Critical Analysis of Overblown Claims and Misapplications" | Patel, A. | 2018 | Harvard Business Review | Identified numerous instances of companies using blockchain technology without a compelling business case, often resulting in increased costs and complexity.
"The Internet of Things: A Cautionary Tale of Unfulfilled Promises" | Kim, S. | 2020 | MIT Sloan Management Review | Critiqued the overemphasis on IoT as a panacea for business problems, highlighting the challenges associated with data security, scalability, and integration.
"Virtual Reality: Beyond the Hype" | Chen, L. | 2022 | McKinsey & Company | Analyzed the limitations of VR technology in enterprise settings, emphasizing the need for more practical applications and a clearer understanding of user needs.
"The Illusion of Digital Leadership: A Study of Failed Internet Strategies" | Johnson, M. | 2001 | Journal of Management Studies | Examined cases of companies that attempted to position themselves as internet leaders but ultimately failed due to strategic missteps, technological limitations, and market changes.


Internet of Things hype led firms to connect “everything” to the internet, often creating security risks even when the additional value or functionality was unclear. 


The point is that companies often seem to add a varnish of new technology whenever a buzzy new tool emerges. That arguably is happening with AI right now. 


Perhaps such relatively uncritical moves contribute to the high failure rate of information technology initiatives and projects generally--failures not only of on-time, on-budget completion, but crucially of projects that simply do not deliver the expected value. 


Study | Year | Failure Rate | Key Findings
The Standish Group's CHAOS Report | Ongoing | 71% | Found that over 70 percent of IT projects fail to meet their original goals or to finish on time and within budget.
KPMG's Global IT Project Success Survey | 2021 | 70% | Reported that 70 percent of organizations experienced at least one IT project failure in the previous 12 months.
PMI's Pulse of the Profession | Annual | Varies | Provides annual data on project success rates, often showing a significant percentage of IT projects failing to achieve their objectives.
McKinsey & Company's "Why IT Projects Fail" | 2014 | 60-70% | Identified common factors contributing to IT project failure, such as unclear business objectives, inadequate project management, and technological challenges.
Forrester Research | Various years | 60-70% | Conducted studies on IT project success and failure rates, often reporting figures similar to other research.

