Saturday, October 11, 2025

AI Circular Investment and Systemic Risk

It’s too early to know whether circular artificial intelligence investments between chipmakers, hyperscalers and AI start-ups pose a systemic risk. But such deals, which resemble vendor financing in key respects and are largely centered on Nvidia, are proliferating.


Circular investment in a value chain occurs when companies that are sequentially linked in the production process (e.g., supplier, manufacturer, distributor) make reciprocal equity investments in each other. This creates a loop or web of ownership and capital flows among entities that are also bound by operational or commercial contracts.


That can foster deeper collaboration, secure supply, and potentially align long-term strategic goals. But such practices also increase systemic risk, in a manner similar to the use of leverage.


The tight interdependence means a financial or operational failure at one company can quickly propagate throughout the entire chain. 


If a key supplier (Company A) experiences a significant loss, the value of the manufacturer's (Company B's) investment in A drops. That loss on B's balance sheet can then trigger financial stress that impairs B's ability to fulfill its obligations to the distributor (Company C), and so on.
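The propagation mechanism can be made concrete with a stylized sketch. All the stakes and the shock size below are hypothetical illustration figures, not actual holdings; real cross-holding networks are far more complex:

```python
# Stylized contagion through reciprocal equity stakes in a value chain.
# stakes[(holder, held)] = fraction of `held`'s equity owned by `holder`.
# All figures are hypothetical.
stakes = {
    ("B", "A"): 0.20,  # manufacturer B owns 20% of supplier A
    ("C", "B"): 0.15,  # distributor C owns 15% of manufacturer B
    ("A", "C"): 0.10,  # supplier A owns 10% of distributor C, closing the loop
}
firms = ["A", "B", "C"]

def propagate(shock_firm, shock, stakes, firms, rounds=60):
    """Cumulative mark-to-market losses as a shock travels around the loop."""
    losses = {f: 0.0 for f in firms}
    wave = {f: 0.0 for f in firms}  # loss newly transmitted this round
    wave[shock_firm] = shock
    for _ in range(rounds):
        for f in firms:
            losses[f] += wave[f]
        nxt = {f: 0.0 for f in firms}
        for (holder, held), frac in stakes.items():
            nxt[holder] += frac * wave[held]  # holder's stake in `held` loses value
        wave = nxt
    return losses

# A $50bn loss at supplier A produces losses at B and C as well, and the
# total system loss exceeds the original shock: the leverage-like effect.
losses = propagate("A", 50.0, stakes, firms)
print({f: round(v, 2) for f, v in losses.items()})
```

The loop factor here (0.20 × 0.15 × 0.10) is small, so amplification is modest; tighter cross-holdings would amplify a shock much more strongly.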


The same is true for financial contagion risks, where a localized default or failure ripples through the ecosystem. 


Circular revenue is another problem. One entity makes an investment that is then used by the recipient to purchase products or services from the investor. 


That circular flow of capital inflates revenue but arguably produces zero or negligible net economic gain. Such deals can also inflate the financial metrics and equity valuations of the firms involved.
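A toy calculation makes the point; the dollar figures are invented:

```python
# Hypothetical circular-revenue round trip: a supplier invests in an AI
# firm, which spends the same capital back on the supplier's products.
investment = 10.0  # $bn invested by the chip supplier in the AI firm
purchases = 10.0   # $bn of chips the AI firm then buys from the supplier

supplier_reported_revenue = purchases          # shows up as revenue growth
supplier_net_cash_in = purchases - investment  # cash returned minus cash sent

print(supplier_reported_revenue)  # 10.0 booked as revenue
print(supplier_net_cash_in)       # 0.0 net new cash entering the loop
```

The supplier's top line grows, but no outside customer demand has entered the system.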


source: Seeking Alpha 

Some recent circular deals in the artificial intelligence space include:

There might not be a problem now, but observers wonder if systemic risk is being created that could emerge later.


Friday, October 10, 2025

AI Isn't the Problem for Critical Thinking: People Are!

One frequently hears worries that use of artificial intelligence is going to diminish human critical thinking skills. The fear is at least partly correct, to the extent that critical thinking is said to involve the ability to evaluate sources and determine information reliability and truthfulness or bias.


And, at least for the moment, users often do not have access to the full range of sources or reasoning that any AI engine uses to derive an answer to a question.


But lots of us might counter that humans in real life often do not seem to use critical thinking all that much, to begin with.


If artificial intelligence emerges as a general-purpose technology that transforms work and the economy in major ways, then AI arguably will also change what we need from our education systems. 


If "learning how to learn" once meant mastering the acquisition, organization, and synthesis of information through traditional human effort, that focus must change if AI can handle most of those tasks, much as search now replaces “going to the library.”


In the age of AI, "learning how to learn" shifts from content mastery to process mastery and other skills, such as vetting sources and determining the reliability or bias of information.


Old Focus (Pre-AI) → New Focus (AI Era)

- Information Acquisition (how to find facts) → Information Curation and Vetting (how to judge AI output, spot bias, and fact-check)
- Knowledge Retention/Memory (how to memorize) → Prompt Engineering and Collaboration (how to use AI effectively as a thinking partner)
- Routine Synthesis/Drafting (how to structure an essay) → Deep, Creative Problem-Solving (how to define novel problems and use AI to test solutions)
- Following Established Procedures (mastering a fixed curriculum) → Adaptability and Continuous Upskilling (embracing a cycle of learning, unlearning, and relearning)


The core goal of human education is no longer to create a knowledge repository but to teach people how to think critically. Increasingly, AI handles the heavy lifting of information retrieval and drafting, freeing people to frame questions, critique answers and create what is new.


The most valuable skill is no longer solving a textbook problem, such as how to optimize a supply chain, but evaluating novel, ambiguous, and human-centric problems, such as the environmental and social costs and other externalities that AI models overlook.


Old assignment: Write an informative research paper on climate change.
AI-era assignment: Use AI to generate three competing solutions to a local environmental problem. Evaluate their feasibility, synthesize a new solution combining the best ideas, and present a policy proposal to the town council.

Old assignment: Complete a worksheet of math equations.
AI-era assignment: Use a programming AI to create a simple calculator app that solves a specific type of complex equation, then debug the AI's code to understand the math principles.

Old assignment: Write a short story about a historical figure.
AI-era assignment: Use an image-generation AI and a text AI to create a multi-modal presentation (images, text, video script) that argues a counter-factual history. The student's grade is based on the coherence and originality of the counter-factual argument.


Many will argue that the capacity for empathy, leadership, collaboration, and trust-building remains essential for nearly all high-value work and social functioning. That might mean, for example, teaching students the importance of empathetic communication skills and practices.


On the other hand, content mastery still has value, in particular for critical thinking. The idea that some content mastery is still required for sophisticated prompt engineering is widely accepted, as the quality of the AI's output is limited by the quality of the human's input. 


In other words, domain-specific knowledge still matters.


Beyond that, if people do not wish to think, AI won't make much of a difference.


OpenAI as an App Provider?

What is the future of enterprise software as artificial intelligence continues to advance, with AI providers perhaps shifting into additional roles within enterprise software value chains?


The proximate cause of the question is OpenAI's direct foray into the application software market, which could shift OpenAI from model provider to direct competitor in applications including customer relationship management, marketing automation, and sales enablement.


OpenAI now offers its own suite of SaaS applications, including the "Inbound Sales Assistant" and the "GTM Assistant" (Go-To-Market Assistant). Other tools covering front-office, middle-office, and back-office software categories are coming, supporting sales enablement, inbound marketing assistance, customer support, product analytics and finance applications.


All of those efforts, and others certain to come, are part of the evolution of generative AI from chatbot to agent.


The key issue is how well OpenAI's AI models might enable entities to "build their own custom CRM" or integrate AI into existing CRM systems.


At a high level, this is an example of OpenAI moving into additional roles across a value chain, or “up the stack” in terms of functions. 


Such “AI-native” enterprise software obviously poses a threat to current enterprise software leaders. 


source: Bain

 

At a high level, some argue that, at some point, it might not be necessary to use a specific application at all to accomplish a business task. Think of the concept as AI becoming the "gateway to business knowledge." 


It is possible the enterprise software industry is in a shift from traditional software applications to a model where AI agents handle routine, end-to-end tasks autonomously, essentially bypassing the need to use specific enterprise software for such purposes. 


Instead of opening an HR system to file a vacation request, you tell your agent: "Book vacation Oct 5–10, notify my manager, update the team calendar, and reassign tasks." The agent does it all, with no app switching or forms to fill out. 
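A toy sketch of that kind of agentic workflow follows. Every handler name and routing step here is hypothetical, invented for illustration; it is not any vendor's actual API:

```python
# Toy sketch: an agent decomposes one natural-language request into
# sub-tasks that would otherwise require opening separate enterprise apps.
# All function names and behaviors below are hypothetical placeholders.

def book_vacation(start, end):   return f"vacation booked {start}-{end}"
def notify_manager(msg):         return f"manager notified: {msg}"
def update_calendar(start, end): return f"team calendar blocked {start}-{end}"
def reassign_tasks(owner):       return f"tasks reassigned from {owner}"

def handle_request(start, end, user):
    """Run the whole workflow end to end; the user never opens an HR
    system, a calendar app, or a task tracker directly."""
    return [
        book_vacation(start, end),
        notify_manager(f"{user} out {start}-{end}"),
        update_calendar(start, end),
        reassign_tasks(user),
    ]

results = handle_request("Oct 5", "Oct 10", "alice")
for step in results:
    print(step)
```

In a real deployment, an AI model would parse the request and choose which handlers to invoke; the point is that the applications become back-end services behind a single conversational front end.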


Among other practical impacts, such mechanisms call into question traditional license-based enterprise software business models, reducing their magnitude if not eliminating their role entirely.


source: Bain


Whether there is an AI “financial bubble” or not, the reason for the investment is obvious. AI might be the most impactful new technology since the internet, with equally disruptive effects on many industries and firms.


Enterprise software is but one example of the process at work. 


Tuesday, October 7, 2025

Uh Oh: Big AI Circular Deals

So now we have a new wrinkle to add to the “potential AI bubble” thesis: circular deals between AI infrastructure suppliers (chips and compute platforms) and model providers.

It has been 25 years since that “problem” was last evident. That is long enough that many of today’s investors will not have lived through the aftermath of a market meltdown that such circular deals helped create.

Investors during the dot-com era were burned, in part, by vendor financing and circular deals where firms passed funds back and forth to prop up their businesses. One example was capacity supplier A buying X amount of capacity from carrier B, while B purchased the same amount of capacity from A.

Under “normal” circumstances that is just business as usual, since no single capacity supplier has a network that covers 100 percent of the locations its customers might need to reach.

The problem was that both A and B were able to show “revenue” on their books (capacity sales) that was essentially fictitious. The net impact was zero.
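The arithmetic of such a swap, with invented figures:

```python
# Hypothetical dot-com-style capacity swap: carriers A and B each buy
# $50M of capacity from the other in the same period.
swap = 50.0  # $M each way

revenue_a = swap          # A books its sale to B
revenue_b = swap          # B books its sale to A
net_cash_a = swap - swap  # A receives $50M from B and pays $50M to B
net_cash_b = swap - swap

combined_reported_revenue = revenue_a + revenue_b
combined_net_cash = net_cash_a + net_cash_b
print(combined_reported_revenue, combined_net_cash)  # 100.0 booked, 0.0 net
```

Both income statements grow; the combined cash position of the two firms does not.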

Such “circular deals” now are a feature of artificial intelligence investments. Nvidia’s partnership with OpenAI, wherein Nvidia can invest up to $100 billion in the firm over time, is one example.

So is the deal between OpenAI, Oracle, and Nvidia to support the "Stargate" data center project.

This deal involves OpenAI securing computing power from Oracle, which in turn purchases billions of dollars worth of Nvidia's AI chips, while Nvidia invests directly in OpenAI.



source: Liberty's Highlights

Most recently, OpenAI struck a partnership with Advanced Micro Devices (AMD) in which OpenAI gets warrants on AMD common shares representing potentially 10 percent ownership of AMD, while OpenAI pledges to buy six gigawatts’ worth of AMD processors.

Nvidia’s partnership with, and investment in, CoreWeave (CRWV) also is circular. CoreWeave has raised debt using its Nvidia GPUs as collateral; Nvidia owns shares of CoreWeave; and Nvidia has struck a deal to use any of CoreWeave’s excess capacity through 2032.

The AI circular deals issue creates a potential problem: an artificial ecosystem where large AI firms receive investments from major suppliers, who in turn receive massive orders from the AI firm, masking the true source of demand and inflating stock prices accordingly.

That isn’t to say there is an immediate problem, but the risk increases if revenue growth does not match the valuations currently used to support the deals.

Monday, October 6, 2025

Will AI Note-Taking Apps Reduce Listener Comprehension?

It’s probably way too early to assess the impact of note-taking apps on listener comprehension and recall. So far, studies of the matter seem to have focused on the difference between manual note taking (long hand) and mechanical (typing) note taking. 


That is a different comparison than manual or mechanical note taking versus the use of note-taking apps, which in the immediate sense is the same as “not taking notes” in terms of the effects on listening or comprehension.


To the extent that manual note taking enhances listening, automated note-taking apps might conceivably lead to less-deep cognitive processing or less conceptual understanding. The reason is that studies of the relationship between manual (longhand) note taking and listening comprehension and memory generally suggest that the slower, manual process encourages deeper cognitive processing.


The physical act of writing is thought to force the listener to process and select important information rather than simply transcribing verbatim, which also is thought to encourage memory and understanding. 


On the other hand, since app-generated notes are verbatim transcripts available for later review, it is possible to suggest the long-term impact on learning might even be improved.


Mueller & Oppenheimer (2014) — longhand vs. laptop note-taking in lectures (college students). Longhand notes led to better performance on conceptual questions. Laptop note-takers took more notes with greater verbatim overlap, suggesting shallower processing (transcribing without synthesis) was detrimental to conceptual learning.
https://pubmed.ncbi.nlm.nih.gov/24760141/

Flanigan et al. (2023, meta-analysis) — typed vs. handwritten lecture notes (24 separate studies). Handwritten notes led to higher academic achievement (Hedges' g = 0.248) despite typing producing a higher volume of notes. Concluded handwritten notes are more useful for studying and committing to memory.
https://scholars.georgiasouthern.edu/en/publications/typed-versus-handwritten-lecture-notes-and-college-student-achiev

Özbay (2013) — note-taking vs. no note-taking while listening (higher education). Note-taking was found to have an impact on comprehension and recall in lectures, rendering listeners more active and engaging them in higher-order cognitive skills (evaluation, interpretation, summarizing).
https://www.researchgate.net/publication/271025457_The_impact_of_note-taking_while_listening_on_listening_comprehension_in_a_higher_education_context

Kusumi, Mochizuki, & Van Der Meer (2024) — handwriting vs. typewriting (EEG study). Handwriting showed far more elaborate brain connectivity patterns (theta/alpha coherence) than typewriting, patterns known to be crucial for memory formation and learning.
https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2023.1219945/full

Katims & Piolat (2023) — handwritten notes and listening comprehension (L2 English learners). The quality of handwritten notes, specifically the number of 'information units' and a higher 'efficiency ratio' (relevant info/total words), correlated positively with listening comprehension test scores; total word count was not a factor.
https://publications.coventry.ac.uk/index.php/joaw/article/download/838/983

Dunkel, Mishra, & Berliner (1989) — note-taking and L1 vs. L2 listening. For L1 (native language) listening, having notes available during the test was the most beneficial aspect, not the act of taking notes alone. For L2 (second language) listening, the impact of note-taking on overall performance was often non-significant.
https://www.govtilr.org/Publications/Notetaking.pdf
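For reference, the Hedges' g effect size cited for the Flanigan meta-analysis is a bias-corrected standardized mean difference. A minimal implementation, using made-up sample statistics rather than any study's actual data:

```python
import math

def hedges_g(mean1, mean2, sd1, sd2, n1, n2):
    """Bias-corrected standardized mean difference (Hedges' g)."""
    df = n1 + n2 - 2
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    cohens_d = (mean1 - mean2) / pooled_sd
    correction = 1 - 3 / (4 * df - 1)  # small-sample bias correction
    return correction * cohens_d

# Hypothetical exam scores: handwriting group vs. typing group.
g = hedges_g(mean1=78.0, mean2=74.0, sd1=10.0, sd2=10.0, n1=40, n2=40)
print(round(g, 3))
```

A g of 0.248, as in the meta-analysis, would correspond to roughly a quarter of a standard deviation advantage for handwritten notes: a small-to-moderate effect.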


It’s too early to assess the impact of automated note-taking apps, which require virtually no effort on the part of the listener, either manual or mechanical. But one suspects the impact will be greater on listening attentiveness than on long-term memory.


If the core function of manual note-taking is the encoding effect (deep processing required for selection and paraphrasing), then automated apps bypass this active filtering process, leading to a shallower level of processing during the listening event, we might guess.


Since the mental effort required to transform a spoken idea into a concise, manually written note forces the listener to analyze and synthesize the information, the absence of a need to do so could, or should, lead to less thinking as a part of taking the notes.


AI is Solow Paradox at Work

An analysis of 4,500 work-related artificial intelligence use cases suggests we are only in the very-early stages of applying AI at work a...