Can Artificial Intelligence Make Sense of It?

Governments and companies around the world are turning to automated artificial intelligence tools to make sense of what people post on social media, for purposes ranging from hate speech detection to criminal investigations to taking down extremist propaganda. Automated technology is expected to accomplish, at massive scale, the kind of analysis that humans can achieve only at a much smaller one.

Unfortunately, today's available tools for automating social media content analysis have a limited ability to parse the real meaning of human communication or to correctly detect a speaker's intentions or motivations. Policymakers need to understand these limitations before endorsing or adopting automated content analysis tools. If proper frameworks and laws protecting freedom of expression are not enacted, these tools can facilitate draconian censorship and biased enforcement of laws and platform terms of service.

Automated social media content analysis carries the risk of further marginalizing and censoring groups that already face discrimination on the basis of ethnicity, gender, or geography. Many commercially available natural language processing tools are effective only for English-language text, and reliance on them is likely to create harmful outcomes for non-English speakers. Non-English text is more likely to be misinterpreted by these tools, creating more unwarranted censorship of, or suspicion toward, speakers of languages other than English.
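To illustrate the failure mode described above, here is a minimal sketch of an English-only keyword filter, the crudest form of automated content analysis. The keyword list and sample posts are hypothetical placeholders, not real moderation data: the point is that a filter built around English vocabulary catches an English post but silently passes the same sentence expressed in another language, here transliterated Urdu.

```python
# Minimal sketch of why English-only moderation tools fail on other
# languages. The lexicon and example posts below are illustrative
# placeholders, not a real moderation system.

FLAGGED_TERMS = {"attack", "destroy"}  # hypothetical English-only lexicon

def is_flagged(post: str) -> bool:
    """Flag a post if any token matches the English lexicon."""
    tokens = post.lower().split()
    return any(token.strip(".,!?") in FLAGGED_TERMS for token in tokens)

# The same sentence in English and in transliterated Urdu:
english_post = "We will attack them tomorrow"
urdu_post = "Hum kal un par hamla karenge"  # same meaning, different language

print(is_flagged(english_post))  # True  - the English post is caught
print(is_flagged(urdu_post))     # False - identical meaning slips through
```

Real systems use statistical models rather than keyword lists, but the underlying asymmetry is the same: a model trained overwhelmingly on English data has little signal for other languages, so it either misses harmful content or misclassifies benign content from those speakers.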

When social media platforms or governments adopt automated content analysis tools, the algorithms behind those tools can become the de facto rules for enforcing a website's terms of service or a country's laws. Disparate enforcement of laws or terms of service by biased algorithms that disproportionately censor people of color, women, and other marginalized groups raises obvious civil and human rights concerns, which need to be addressed through independent oversight. But a problem with the existing structure of internet governance is that countries like Pakistan, and many others in the Global South, have very little say when these tools are tested, implemented, and modified on their citizens. Even trial results are kept confidential, and no transparency is provided over the de facto use of AI tools that are shaping a discourse many in the developing world struggle to understand.

Democracy was once threatened by Communists and Islamists; now it seems the biggest threat will come from such tools being used on the internet. We all need to work together, collaborate, and share our experiences to contribute to society, or risk being marginalized by AI-based automated tools.
