Tuesday, April 9, 2024

Can AI Make Social Media More Pleasant?

Many of us would say we avoid social media because so much of the content seems impolite, disrespectful, immature or worse. Perhaps no amount of content moderation is going to stop some people from behaving in ways that are rude, uncivil and lacking in grace.


Generative artificial intelligence could help reduce the amount of hostile and uncivil social media content in several ways. To start, generative AI models can be trained to identify patterns of language commonly used in hateful, abusive or harassing content, flagging such posts for further review by human moderators.
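A minimal sketch of that flag-for-review step, using a handful of hypothetical regex patterns as a stand-in for a trained classifier (the patterns and function names here are illustrative, not any platform's actual system):

```python
import re

# Hypothetical pattern list; a real system would use a trained model,
# not a few hand-written regexes.
ABUSE_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in [r"\bidiot\b", r"\bshut up\b", r"\bnobody wants you here\b"]
]

def flag_for_review(posts):
    """Return posts matching any abusive pattern, queued for a human
    moderator to review rather than removed automatically."""
    return [p for p in posts if any(rx.search(p) for rx in ABUSE_PATTERNS)]

posts = ["Great point, thanks!", "Shut up, idiot."]
print(flag_for_review(posts))  # only the second post is flagged
```

The key design choice the paragraph implies is that the model only flags; the removal decision stays with a human.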


The caveat is that some people seem to believe ideas themselves are inherently abusive, or “violent” or “threatening” when they might arguably simply reflect a difference of opinion. So “protection” for some might seem to be censorship by others. 


Assuming that sort of bias can be largely avoided (a big “if”), then perhaps AI can go beyond simple keyword matching and analyze the overall sentiment and context of a post. This can help identify nuanced forms of negativity or attacks that might bypass simpler filters, again assuming some agreed-upon distinction between free speech and differing ideas on the one hand, and bad behavior on the other.
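One way to read “beyond keyword matching” is scoring context, not just words. The toy scorer below, a sketch with made-up word lists, rates the same negative word higher when it is aimed at a person than when it criticizes an idea; a real system would use a trained sentiment and context model:

```python
NEGATIVE = {"awful", "stupid", "pathetic"}   # illustrative word lists only
TARGETING = {"you", "your"}                  # directed at a person, not an idea

def hostility_score(text):
    """Toy context score: negative words aimed at a person count double
    compared with negative words about a topic."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    negatives = len(words & NEGATIVE)
    directed = bool(words & TARGETING)
    return negatives * (2 if directed else 1)

print(hostility_score("This proposal is awful."))      # 1: criticizes an idea
print(hostility_score("You are awful and pathetic."))  # 4: attacks a person
```

The gap between those two scores is exactly the free-speech-versus-bad-behavior line the paragraph describes: disliking an idea scores low, attacking a person scores high.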


AI might also be able to analyze conversations and identify situations where a disagreement is escalating beyond ordinary hostility and bad manners. In such cases, the AI might suggest alternative phrasings or ways to reframe arguments, promoting the kind of civil and respectful dialogue some might call simple good manners and politeness.
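Escalation detection can be sketched as watching per-message hostility scores over a thread and flagging when they keep rising. The heuristic below is an assumption for illustration, not a production detector:

```python
def is_escalating(scores, window=3):
    """Flag a thread when hostility scores rise strictly across the
    last `window` messages (a toy monotonic-increase heuristic)."""
    recent = scores[-window:]
    return len(recent) == window and all(a < b for a, b in zip(recent, recent[1:]))

print(is_escalating([0, 1, 3]))   # True: each reply is more hostile
print(is_escalating([2, 1, 0]))   # False: the thread is cooling down
```

A flagged thread is where the suggested-rephrasing step would kick in, before the exchange crosses from bad manners into something worse.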


On a different level, gen AI might be used to identify and showcase positive and constructive interactions on a platform, creating a more positive atmosphere and nudging users toward more civil behavior.
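The showcasing idea amounts to ranking: score posts for constructive content and surface the top few. A sketch, with an invented positive-word list standing in for a trained model:

```python
POSITIVE = {"thanks", "helpful", "great", "appreciate"}  # illustrative only

def highlight_positive(posts, top_n=2):
    """Toy ranking: surface the posts containing the most
    positive words, as a stand-in for a constructive-content model."""
    def score(post):
        return len({w.strip(".,!?").lower() for w in post.split()} & POSITIVE)
    return sorted(posts, key=score, reverse=True)[:top_n]

feed = ["Thanks, very helpful!", "meh", "Great explanation"]
print(highlight_positive(feed))  # the two constructive posts
```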


Or perhaps AI could provide users with personalized prompts to encourage them to reconsider potentially offensive language before posting.
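That pre-posting nudge can be sketched as an intercept step where the user keeps the final say; the classifier here is a deliberately crude stand-in:

```python
def pre_post_check(text, classifier):
    """Before publishing, ask the user to reconsider if the draft looks
    offensive. Returns a prompt string, or None to publish untouched."""
    if classifier(text):
        return ("Your post may come across as hostile. "
                "Post anyway, or edit first?")
    return None

flag = lambda t: "idiot" in t.lower()  # stand-in for a real classifier
print(pre_post_check("You idiot!", flag))
print(pre_post_check("I disagree with this.", flag))  # None: no prompt
```

Note the function never blocks the post; it only prompts, which keeps the decision with the user rather than the model.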


Of course, value judgments are always involved. Some might consider certain subjects or keywords “offensive,” while others might consider them merely descriptive. Historical context can matter as well: some ideas or words that were once common are considered inappropriate in a modern context. 


Language also can be nuanced, and sarcasm or humor might be misinterpreted by AI. Comedians, almost by definition, make fun of lots of things. 


And there's a delicate balance between filtering harmful content and stifling free speech. Just because some people dislike certain ideas does not mean those ideas are “hate speech” or somehow “violence” or “hostility.” 


