Thursday, November 2, 2023

If Your Business Generates Content, Generative AI Almost Certainly Can Help

Nobody should be surprised by the results of a study of consultant work using generative AI, which suggests “consultants using AI were significantly more productive (they completed 12.2 percent more tasks on average, and completed tasks 25.1 percent more quickly), and produced significantly higher quality results (more than 40 percent higher quality compared to a control group).”


Generative AI is designed to aid content creation, and consultants at firms such as Boston Consulting Group, a global management consulting firm, are required as part of their work to produce content, including advice, for their clients. 


Mirroring some other early studies, the study, “Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality,” suggests that GenAI especially increases performance of consultants considered to be below average. 


“Those below the average performance threshold (saw performance) increasing by 43 percent and those above increasing by 17 percent,” the authors argue. That might also not come as a surprise. 


GenAI arguably provides its greatest value when users are less expert, have less domain knowledge, and might take longer to identify, summarize and apply existing domain knowledge to the context of a particular engagement. 


It might also not come as a surprise that GenAI seemed to be most useful for tasks “within the frontier” of tasks GenAI is known to be good at. “For a task selected to be outside the frontier (specifically, tasks that GenAI is known not to perform well at the moment), however, consultants using AI were 19 percentage points less likely to produce correct solutions compared to those without AI,” the authors say. 


The key observation, though, is that “tasks that appear to be of similar difficulty may either be performed better or worse by humans using AI,” the authors note. And there are indications that worse performance occurs when the models are asked to produce content requiring skills GenAI does not yet have. No surprise there. 


It is worth noting that the “Hawthorne effect” might also be at work. 


The Hawthorne Effect is the phenomenon that individuals alter their behavior when they are aware of being observed. It is named after the Hawthorne Works, a Western Electric factory in Chicago where a series of experiments were conducted in the 1920s and 1930s to study the effects of working conditions on productivity.


In the Hawthorne experiments, researchers found that workers' productivity increased regardless of the changes they made to working conditions, such as lighting, break times, and work hours. 


The researchers eventually concluded that the increase in productivity was due to the fact that the workers were aware of being observed and wanted to perform well.


The point is that we must evaluate such GenAI studies with the possibility that subject performance might be, to some extent, independent of the tools and work scenarios studied. 


Wednesday, November 1, 2023

Content Design When People Have the Attention of a Goldfish

“I have no attention span,” says singer Carrie Underwood. “I get bored so fast.”

All of us do, these days. So, getting and keeping your attention matters. And keeping you engaged is harder than ever.

Every post-conference survey tells us that many attendees are too busy to attend conference sessions.

“We’ve all got ADD now, short attention span and all that,” says actor Hugh Grant.

Ninety percent of millennial business professionals, 85 percent of Gen Xers, and 82 percent of baby boomers report they “shifted their focus away” from the speaker during the most recent presentation they saw live. It gets worse. It is not true that attention spans are just eight seconds among millennials, but the direction is right.

* 55 percent of professionals say you must tell a great story to get their attention
* 95 percent of professionals multitask during meetings
* Professionals in their 30s and 40s decide in eight seconds whether content is relevant
* Millennials expect ads to last only six seconds
* Content for apps and web is fast-paced, so people are accustomed to shorter content

No one has time for what isn’t relevant, convenient, or compelling.

5G Has Brought Less Change Than Many Expected


The primary value still seems to be "more capacity" and "higher speeds." It's similar to the value of fiber to home or DOCSIS 4.0: more capacity; faster speed. In many ways, greater capacity is a "hygiene" issue: what one has to do to remain relevant in business and sustain the business.

So far, at least, there has been precious little adoption, at scale, of new use cases and features. The advantages of sensor device density or lower latency have yet to be embraced, for example. 

It might grate, but the value of 5G is that it supports a more-capable (faster) "dumb pipe." That is to be expected when a permissionless app architecture is in place.  

The Zero Touch, High Transaction Telco, Marc Halbfinger, Console Connect CEO

Tuesday, October 31, 2023

AI Regulation Seems to be on an "Ex Ante" Track, Not Ex Post Facto

The current rush to regulate AI appears to differ from prior efforts to regulate computing technology and especially the internet. 


As is always the case, regulators seek to balance consumer protection and innovation. With regard to the internet, however, regulators took a “wait and see” approach. This ex post facto approach allowed the internet to develop, with regulation imposed only when issues were deemed to have arisen. 


Most major internet regulations took a decade to a few decades to develop.


Regulation | Date Passed | Years After Internet Commercialization
Communications Decency Act (CDA) | 1996 | 10 years
Children's Online Privacy Protection Act (COPPA) | 1998 | 12 years
Digital Millennium Copyright Act (DMCA) | 1998 | 12 years
CAN-SPAM Act | 2003 | 17 years
Do Not Track (DNT) | 2011 | 25 years
General Data Protection Regulation (GDPR) | 2016 | 30 years
California Consumer Privacy Act (CCPA) | 2018 | 32 years


AI seems to be drawing scrutiny quite early in its development. Already, issues such as misinformation, malware and cybercrime have been raised about AI use. 


Issue | Earlier form of internet or computing governance | AI governance/regulation
Privacy | The Gramm-Leach-Bliley Act (GLBA) requires financial institutions to protect the privacy of their customers' financial information. | AI systems can collect, store, and process vast amounts of personal data, which raises concerns about privacy.
Misuse | The Digital Millennium Copyright Act (DMCA) prohibits the circumvention of digital rights management (DRM) technologies. | AI systems can be used to create deep fakes and other forms of synthetic media, which could be used for malicious purposes.
Privacy and security | General Data Protection Regulation, California Consumer Privacy Act | These regulations were developed to protect user privacy and security on the internet.
Loss of control | ICANN | This organization is responsible for overseeing the global DNS.


Many observers would suggest there is a much higher level of unease about AI than there was about the internet, raising the specter of ex ante regulation that slows development, which might be precisely what some of its backers intend. 

Ex ante regulation is regulation that takes place before an event or activity occurs. It is designed to prevent problems from happening in the first place. 

Others might caution that ex post facto has traditionally been the case for internet regulation. 

Ex post facto regulation is regulation that takes place after an event or activity has already occurred. It is designed to punish or compensate for harm that has already been done.

One might say the difference in approach comes from fears that the harm or damage is potentially too great to rely on ex post facto regulation. 

How Will Most Firms Recover AI Usage Costs?

Since most users of cloud-based software-as-a-service offerings already are accustomed to pricing based substantially on usage, generative AI might represent few new pricing issues. If their customers use more, they will pay more.


For cloud computing suppliers, there are at the moment perhaps fewer issues about how to charge for usage than about how to create the high-performance compute fabric AI in general and large language models in particular require. 


Entirely different issues confront most other firms that must figure out how to price AI capabilities incorporated into their products. For most firms, AI is a new cost that must be recovered somehow in retail prices charged to customers.


Uncertainty about levels of usage is one variable. But there is no uncertainty about product costs when AI features are based on usage of "cloud computing as a service." Inference operations are going to require cloud computing usage, each time an inference operation is invoked.


How to recover the costs of paying for cloud compute therefore is a new question to be answered.


For most firms that will want to use large language models (generative AI), the big issue is how to recover the cost of LLM features used by their customers. So far, the most-common models are:

  • AI is a feature of a higher-priced version of the existing product (higher-cost plan versus standard)

  • AI is a value-added feature with an extra flat-fee cost

  • AI is a feature of an existing product for which there is no direct incremental charge to the user (such as a search or social media or messaging user), but might eventually represent a higher cost to customers (advertisers or marketers buying advertising, for example)

  • AI is a no-charge feature of an existing product, but with usage limits (freemium)

  • AI is a new product with charges that are largely usage-based (GenAI “as a service” offered by infrastructure-as-a-service providers). 
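The trade-off among these models can be illustrated with some simple arithmetic: a flat fee only recovers costs while per-customer usage stays below a break-even number of queries. The sketch below uses purely hypothetical prices and per-query cloud costs, not any real vendor's rates.

```python
# Hypothetical break-even sketch: flat-fee AI pricing vs. per-query cloud cost.
# COST_PER_QUERY and FLAT_FEE are illustrative assumptions, not real rates.

COST_PER_QUERY = 0.02   # assumed cloud inference cost per query, in dollars
FLAT_FEE = 10.00        # assumed monthly AI add-on price per user

def monthly_margin(queries_per_user: int) -> float:
    """Supplier margin per user per month under a flat monthly fee."""
    return FLAT_FEE - COST_PER_QUERY * queries_per_user

def break_even_queries() -> int:
    """Usage level at which the flat fee exactly covers cloud costs."""
    return int(round(FLAT_FEE / COST_PER_QUERY))

for q in (100, 500, 1000):
    print(q, round(monthly_margin(q), 2))
print("break-even:", break_even_queries())
```

At the assumed rates, margin turns negative past 500 queries per user per month, which is why flat-fee pricing becomes risky unless usage caps or usage-based tiers kick in.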


And some software firms might use a few of those models. For example, Microsoft charges for its AI-assisted copilots, including those in Office and GitHub, with prices ranging from $5 to $40 per user per month. 


But some copilots are included with certain enterprise subscriptions, while a number of Microsoft's consumer AI services remain free for now.


Other software product suppliers also must grapple with how to recover costs of supporting AI features used by their customers.


Box includes AI features for business customers subscribed to its Enterprise-Plus tier and above. Each user will have access to 20 queries per month, with 2,000 additional queries available on a company level. Additional usage will require further payment.
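A cap like that can be thought of as a two-level quota check: a per-user allowance backed by a shared company pool. The sketch below is an assumption about how such a limit could be enforced, using the figures quoted above; it is not Box's actual implementation.

```python
# Hypothetical two-level AI query quota: per-user allowance plus a shared
# company pool, loosely modeled on the tiers described above.

from dataclasses import dataclass, field

USER_QUOTA = 20        # queries per user per month (assumed allowance)
COMPANY_POOL = 2000    # additional shared queries per company per month

@dataclass
class QuotaTracker:
    user_counts: dict = field(default_factory=dict)
    pool_used: int = 0

    def allow_query(self, user: str) -> bool:
        """Consume one query if allowed; draw on the user's own allowance first."""
        used = self.user_counts.get(user, 0)
        if used < USER_QUOTA:
            self.user_counts[user] = used + 1
            return True
        if self.pool_used < COMPANY_POOL:
            self.pool_used += 1
            return True
        return False  # over both limits: additional usage requires payment

tracker = QuotaTracker()
for _ in range(25):
    tracker.allow_query("alice")
print(tracker.user_counts["alice"], tracker.pool_used)  # 20 5
```

Once both the allowance and the pool are exhausted, `allow_query` returns `False`, which is the point at which "additional usage will require further payment."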


Adobe is including "generative credits" with its various Creative Cloud, Express and Firefly services. Starting November 2023, Adobe will offer additional credits via subscription plans, starting at $4.99 per month for 100 credits. 


“Usage” seems to be the area where there is most danger for retailers, who must make key assumptions about the value of AI when embedded into core products, as well as the cost recovery mechanism when suppliers are not yet sure about how much their customers will use the AI features. 


The key danger will be underestimating usage, unless usage is part of the customer AI pricing formula. 


In a market where retail customers use their own hardware, that would not be an issue. 


But in a market reliant on cloud computing, where retail customers use the supplier’s cloud computing resources, usage really does matter, whenever the supplier is in turn paying a cloud services vendor for compute. 


A few hyperscale cloud computing firms (Microsoft, Google, Facebook) will be somewhat protected, as they can use their own infrastructure. But most enterprises will have to pay retail rates for computing services, so volume does matter. 


Although “compute as a service” suppliers are going to face customer pushback as AI compute loads and charges mount, they tend to be protected, as most of their services are usage based. Customers who use more, pay more. 


Businesses that buy “compute as a service” will have to take usage into account. 


Some of those “customer usage and customer pricing” issues might be reminiscent of issues connectivity providers faced in the past, as usage-based pricing of their core products shifted. 


Though both flat-fee and usage-based pricing were common in the era when voice was the dominant product, flat fee has been the bigger trend for internet access, interconnection and transport. Within some limits, internet access, for example, tends to be “flat fee” based. 


That poses key issues as the volume of usage climbs, but revenue does not. One can see this in network capital investment, for example, where network architects must assume perpetual 20-percent (or higher) increases in usage every year. 
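Compounding makes that assumption dramatic: at a steady 20 percent annual increase, usage roughly doubles in four years and grows about sixfold in a decade. A quick illustration:

```python
# Compound usage growth at a steady annual rate (illustrative arithmetic only).

GROWTH = 0.20  # assumed 20-percent annual usage increase

def usage_multiple(years: int, rate: float = GROWTH) -> float:
    """Total usage multiple after `years` of compound growth."""
    return (1 + rate) ** years

print(round(usage_multiple(4), 2))   # roughly doubles in four years
print(round(usage_multiple(10), 2))  # about sixfold in a decade
```

That is the capacity planners' dilemma in miniature: costs scale with the multiple, while flat-fee revenue does not.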


In some ways, suppliers that embed AI into their products are going to face similar problems. Though cloud computing suppliers will still largely be able to employ usage mechanisms, many retailers of other products are as-yet unclear about how much usage will eventually happen.


That, in turn, means they are as-yet unsure about long-term cost recovery mechanisms and retail AI pricing. 


Flat-fee pricing will be the simplest solution for the moment, and likely the least-objectionable method from a customer standpoint. Whether that continues to work so well in the future is the issue, if AI inference operations grow in volume as much as some might suspect. 


It will be difficult for most firms to sustain low flat-fee rates as volume escalates. The exceptions are those handful of firms that own their cloud compute infrastructure (Microsoft, Google, Facebook, Amazon, for example). 


Of course, some of those sorts of firms will be able to justify “no fee to use” as well, since they have commerce or advertising revenues supporting many of their core products. That is a luxury few firms will experience.


AI usage is going to be a big issue for most firms. So is the issue of how to recover costs related to supplying that usage.

Monday, October 30, 2023

Premature AI Regulation Could Slow its Development

Premature regulation of artificial intelligence--as compelling as it will seem to policymakers--might stifle or slow deployment and benefits, as some have argued has been the case for new technology in many other instances. But the evaluation of such rules often depends on whether it is “consumer advocates” or “industry advocates” who do the viewing. 


Always, “consumer protection” has been viewed as an easy win by lawmakers, who can say they are acting in defense of citizens. But outcomes sometimes are not what lawmakers envisioned. 


The Fairness Doctrine was supposed to support “differing views” on controversial matters. Instead, the rules mostly ensured that broadcasters would avoid such issues. Other rules relating to electronic media likewise have been touted as protections, but also were challenged on grounds of restricting freedom of expression. 


Advocates of network neutrality have argued that the Telecommunications Act of 1996, designed to promote innovation in communications, had unintended consequences for network neutrality, as the Act “might” have allowed internet access providers to block content, favor their own content or extract payments from app providers. 


Opponents of network neutrality say there is scant evidence that such abuses ever happened, even in the absence of rules. Even those who believe ISPs would behave badly if they could might also agree that robust competition seemingly prevents ISPs from acting in such a manner. 


Technology | Regulation | Impact
Motion pictures | The Hays Code (1930-1966) | The Hays Code was a self-censorship code that restricted the content of Hollywood films. The code was designed to uphold traditional moral values, but it also stifled creativity and innovation in the film industry.
Television | The Fairness Doctrine (1949-1987) | The Fairness Doctrine was a policy that required broadcasters to present opposing views on controversial issues of public importance. The doctrine was designed to ensure that the public had access to a variety of viewpoints, but it also stifled free speech and debate.
Video games | The Entertainment Software Rating Board (ESRB) (1994-present) | The ESRB is a self-regulatory body that rates video games based on their content. The ESRB was created in response to public concerns about violence in video games. While the ESRB has been successful in reducing the amount of violence in video games, it has also been criticized for stifling creativity and innovation in the video game industry.
Internet | The Communications Decency Act (CDA) (1996-1997) | The CDA was a law that attempted to restrict access to indecent and obscene material on the internet. The law was challenged in court and ultimately struck down as unconstitutional. However, the CDA had a chilling effect on free speech online.
Blockchain | The Securities and Exchange Commission (SEC) (2017-present) | The SEC has been criticized for its regulation of the blockchain industry. The SEC has taken a number of actions against blockchain companies, including issuing cease-and-desist orders and filing lawsuits. The SEC's actions have created uncertainty in the blockchain industry and have made it difficult for blockchain companies to raise capital.
AI | The European Union's General Data Protection Regulation (GDPR) | The development of new AI technologies, such as facial recognition and voice assistants
Internet | The Telecommunications Act of 1996 | Early internet development, including the growth of social media and streaming services
Radio | The Radio Act of 1927 | Early radio broadcasting, including experimental and avant-garde programming


DIY and Licensed GenAI Patterns Will Continue

As always with software, firms are going to opt for a mix of "do it yourself" owned technology and licensed third party offerings....