Thursday, November 2, 2023

If Your Business Generates Content, Generative AI Almost Certainly Can Help

Nobody should be surprised by the results of a study of consultant work using generative AI, which suggests "consultants using AI were significantly more productive (they completed 12.2 percent more tasks on average, and completed tasks 25.1 percent more quickly), and produced significantly higher quality results (more than 40 percent higher quality compared to a control group)."


Generative AI is designed to aid content creation, and consultants at firms such as Boston Consulting Group, a global management consulting firm, are required as part of their work to produce content, including advice, for their clients. 


Mirroring some other early studies, the study, "Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality," suggests that GenAI especially increases the performance of consultants considered to be below average. 


"Those below the average performance threshold [saw performance] increasing by 43 percent and those above increasing by 17 percent," the authors argue. That might also not come as a surprise. 


GenAI arguably provides its greatest value when users are less expert, have less domain knowledge, and would otherwise take longer to identify, summarize, and apply that knowledge to the context of a particular engagement. 


It might also not come as a surprise that GenAI seemed to be most useful for tasks "within the frontier" of what GenAI is known to do well. "For a task selected to be outside the frontier [specifically, a task that GenAI is known not to perform well at the moment], however, consultants using AI were 19 percentage points less likely to produce correct solutions compared to those without AI," the authors say. 


The key observation, though, is that "tasks that appear to be of similar difficulty may either be performed better or worse by humans using AI," the authors note. And there are indications that GenAI models perform worse when they are asked to produce content requiring skills GenAI has not yet mastered. No surprise there. 


It is worth noting that the “Hawthorne effect” might also be at work. 


The Hawthorne effect is the phenomenon in which individuals alter their behavior when they are aware of being observed. It is named after the Hawthorne Works, a Western Electric factory in Chicago where a series of experiments was conducted in the 1920s and 1930s to study the effects of working conditions on productivity.


In the Hawthorne experiments, researchers found that workers' productivity increased regardless of the changes they made to working conditions, such as lighting, break times, and work hours. 


The researchers eventually concluded that productivity increased because the workers knew they were being observed and wanted to perform well.


The point is that we must evaluate such GenAI studies keeping in mind the possibility that subject performance might be, to some extent, independent of the tools and work scenarios studied. 

