Friday, April 5, 2024

Can Decentralized AI Succeed?

Decentralized artificial intelligence refers to a different approach to architecting AI models, analogous in some ways to the “Web3” effort to create a decentralized web that is not controlled or led by a few large firms. Some might also liken decentralized AI to the concept of “peer-to-peer” computing, which likewise uses a distributed network of physical computation. 


Decentralized AI aims to distribute the physical basis of the AI platform across a network of computers, and in some cases also distribute the logical platform of data sources upon which the models are built.


Proponents of decentralized AI believe it promotes transparency in how AI models are trained and used, and that it might reduce bias if diverse datasets from many sources are used. That may not be a particularly strong advantage, since most large models are already trained on a wide variety of data sources. 


Some believe there are data privacy or user monetization advantages. Federated learning techniques might allow users to keep their data private while still contributing to the training of AI models.
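To make the federated-learning idea concrete, here is a minimal sketch (a hypothetical Python/NumPy example, not any particular startup's implementation): each client trains a simple linear model on data it never shares, and a coordinating server only averages the resulting weights.

    import numpy as np

    def local_update(weights, features, labels, lr=0.1, epochs=5):
        # One client's local training: a few gradient steps on a simple
        # linear model, using only that client's private data.
        w = weights.copy()
        for _ in range(epochs):
            grad = features.T @ (features @ w - labels) / len(labels)
            w -= lr * grad
        return w

    def federated_average(client_weights, client_sizes):
        # Server-side aggregation: average updates, weighted by data size.
        total = sum(client_sizes)
        return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

    # One simulated round with three clients whose raw data never leaves them.
    rng = np.random.default_rng(0)
    global_w = np.zeros(3)
    clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])
    print("aggregated global weights:", global_w)

The privacy argument rests on the fact that only weight updates cross the network; production systems typically layer secure aggregation or differential privacy on top of this basic loop.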


By distributing AI models across a network, it should be more difficult for any single entity to control or censor them. Many startups, not all of which will survive, have emerged to pursue these goals. 


Ocean Protocol (OCEAN) provides a decentralized marketplace for data and AI models. Users can buy, sell, and share data securely while ensuring data privacy. 


SingularityNET also aims to create a global marketplace for AI services and tools built on a blockchain platform. Fetch is another blockchain-based AI effort. 


Numerai is building a decentralized hedge fund powered by a community of AI developers and data scientists. 


Federated learning and Secure Multi-Party Computation (SMPC) underpin other efforts, including Oasis Labs, Federated AI, and a group within Intel working on federated learning techniques and privacy-preserving AI.
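As a rough illustration of the SMPC idea (a hypothetical additive secret-sharing sketch in Python, not the protocol any of these firms actually uses), several parties can jointly compute a sum while each party's input stays hidden behind random shares:

    import random

    PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

    def share(secret, n_parties):
        # Split a secret into n additive shares that sum to it mod PRIME.
        shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
        shares.append((secret - sum(shares)) % PRIME)
        return shares

    # Three parties with private inputs; each hands one share to every party.
    inputs = [42, 17, 99]
    all_shares = [share(x, 3) for x in inputs]

    # Each party locally adds the shares it holds (one per input) ...
    partial_sums = [sum(column) % PRIME for column in zip(*all_shares)]

    # ... and only the recombined result reveals the total, never the inputs.
    print(sum(partial_sums) % PRIME)  # 158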


Firms including Filecoin (a decentralized storage network), Livepeer (decentralized video streaming), and Theta Network (decentralized video delivery) are building more application- or use-case-specific forms of decentralized AI infrastructure.


Personal AI aims to build personal assistants. Vana wants to create a way for Reddit users to contribute their data. MyShell wants to create personalized chatbots. 


As always, it is hard to tell whether the alternative approaches will work at scale. In fact, the historical development of computing suggests centralized approaches have tended to win. Some might view the shift to personal computers as a form of decentralization, though it did not prevent the emergence of a few dominant platforms. 


Likewise, distributed email and peer-to-peer music-sharing networks were forms of decentralization, but cloud computing swung the pendulum back toward centralization. Blockchain is presently the platform most believe could enable decentralization efforts. 


But how many of us actually believe the “rule of three” will not eventually emerge for generative AI and other AI models? Markets operating at scale almost always seem to be led by a small number of firms. 


“A stable competitive market never has more than three significant competitors, the largest of which has no more than four times the market share of the smallest,” BCG founder Bruce Henderson said in 1976. 


"A stable competitive market never has more than three significant competitors, the largest of which has no more than four times the market share of the smallest,” Henderson argued. Sometimes known as “the rule of three,” he argued that stable and competitive industries will have no more than three significant competitors, with market share ratios around 4:2:1.


Whether decentralized AI can succeed, in that sense, is the question. 

