Wednesday, March 31, 2021

Algorithmic Bias and Algorithmic Choice

Some might agree that algorithmic choice is a reasonable way to deal with fake news or false information on social media sites. Algorithmic choice is inherent in the sorting, ranking and targeting of content. To use a simple example, a search engine must rank-order the results it returns for any user search.
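

To make that concrete, here is a minimal sketch in Python. The scoring signals and weights are hypothetical, but the point holds for any scheme, however elaborate: scoring ends in a single ordering, and some item must come first.

```python
# A minimal, hypothetical sketch of the rank-ordering a search engine must do.
# Real engines combine hundreds of signals; any combination still yields one order.

from dataclasses import dataclass

@dataclass
class Result:
    title: str
    keyword_matches: int   # how often the query terms appear
    freshness: float       # 0.0 (stale) to 1.0 (just published)

def relevance(r: Result) -> float:
    # The weights are themselves an algorithmic choice: someone decided
    # that freshness is worth as much as two keyword hits.
    return r.keyword_matches + 2.0 * r.freshness

results = [
    Result("Older in-depth article", keyword_matches=5, freshness=0.1),
    Result("Breaking news item", keyword_matches=2, freshness=0.9),
    Result("Loosely related post", keyword_matches=1, freshness=0.5),
]

# Sorting by score is the unavoidable step: some result has to be ranked first.
for r in sorted(results, key=relevance, reverse=True):
    print(f"{relevance(r):4.1f}  {r.title}")
```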


Algorithmic choice is essential for online and content businesses that try to tailor information for individuals based on their behavior or stated preferences. And most of us would likely agree that neutral curation, without obvious or intentional bias, is the preferred way of culling and then presenting information to users.


That is a growing issue for hyperscale social media firms, as they face mounting objections to the neutrality of their curation algorithms and practices. It is a delicate issue, to be sure. Decades ago, online sites operated with a loose "community standards" approach that relied on common courtesy and manners.


Today's hyperscale social media platforms seem intentionally to stoke outrageous behavior and the dissemination of arguably false information. Some refer to this as disinformation: the deliberate spreading of claims known to be untrue, with the intention to deceive.


This is not the same thing as a mere difference of opinion, the expression of “an idea I abhor” or “an idea I disagree with.” Disinformation is a matter of manipulation and deception; the abhorred idea is merely an expressed difference of opinion.


Some might argue that allowing more personalized control by users will help alleviate the problem of false or fake information. Perhaps that can help, somewhat, to the extent that people can block items and content they disagree with. 
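

A small, hypothetical sketch shows what user control does and does not accomplish. A user-managed blocklist hides content the user dislikes, but it passes no judgment on whether any post is true or false.

```python
# A minimal sketch of user-defined filtering, assuming a hypothetical feed
# of posts and a user-chosen blocklist of topics. It shields the user from
# disliked content; it does nothing to identify falsehood.

posts = [
    {"author": "alice", "topic": "politics", "text": "..."},
    {"author": "bob",   "topic": "sports",   "text": "..."},
    {"author": "carol", "topic": "politics", "text": "..."},
]

user_blocked_topics = {"politics"}  # chosen by the user, not the platform

def user_filter(feed, blocked_topics):
    """Return only the posts whose topic the user has not blocked."""
    return [p for p in feed if p["topic"] not in blocked_topics]

for post in user_filter(posts, user_blocked_topics):
    print(post["author"], post["topic"])
```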


That does not address the broader problem of potential bias in the creation and application of algorithmic systems, including the rules about what content infringes community standards or is “untrue,” “misleading” or “false.” 


It is akin to the odd notion of subjective "truth," as in "my truth" and "your truth." If something is objectively "true," my subjective opinion about it matters not.


In that sense, user-defined algorithms do not "solve" the problem of fake news and false information. Applying such algorithms merely shields users from exposure to ideas they do not prefer. It is not the same as designing search or culling mechanisms in a neutral and objective way, to the extent that is possible.


Less charitably, user algorithmic control is a way of evading responsibility for neutral curation by the application provider.


The bigger problem is that any algorithm has to be designed to separate “truth” from “falsehood.” And that is a judgment call. Filtering out “ideas we disagree with” is a form of bias that many social media platforms seem to put to work deliberately. 
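

A small sketch may show where the judgment calls hide. Everything in it is hypothetical, but each of the three choices (the labels, the scoring rule, the threshold) is made by a person; the algorithm only applies those choices.

```python
# A minimal sketch of where human judgment enters a "truth" filter.
# All names and numbers here are hypothetical stand-ins.

# Judgment call 1: humans label examples, and their views become the data.
human_labels = {
    "claim that is widely contested": 1.0,   # labeled "false" by a reviewer
    "claim that is widely accepted": 0.0,    # labeled "not false"
}

# Judgment call 2: a scoring rule generalizes from those labels.
# (Stand-in logic; a real system would use a trained classifier.)
def falsehood_score(claim: str) -> float:
    return human_labels.get(claim, 0.5)  # unknown claims score as uncertain

# Judgment call 3: a threshold turns a score into an enforcement action.
THRESHOLD = 0.5  # nudging this number reclassifies speech by fiat

for claim in ["claim that is widely contested", "a brand-new claim"]:
    action = "remove" if falsehood_score(claim) > THRESHOLD else "keep"
    print(f"{claim!r} -> {action}")
```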


Aside from the observation that ideas might best be judged more true or more false only where there is open and free debate about them, algorithms are biased when they deem certain ideas “false” even though those ideas are matters of political, cultural, social, economic, scientific, moral or religious import on which we all know people disagree. 


And that means algorithm designers must make human judgments about what is “true” and what is “false.” It is an inherently biased process to the extent that algorithm designers are not aware of their own biases. 


And that leads to banning the expression of ideas, not because they are forms of disinformation but simply because the ideas themselves are deemed "untrue" or "dangerous." The issue is that separating the "untrue" from the merely "different" involves choice.



