Automated data processing techniques increasingly govern the internet. The boundaries between humans and automated decision-making systems are blurring, with little oversight of who is in charge when we search for something online or when Facebook newsfeeds are broadcast to us through personalization and data-marketing techniques. The growing use of algorithms in decision-making processes on the internet is creating new challenges for society that affect citizens’ human rights globally. Safeguarding rights in the digital age, under the direct or indirect influence of robots and automated systems, will require a deeper understanding of the issue and a response on a global scale. Are societal inequalities merely replicated, or are they amplified, through automated data processing? Given that most algorithms are designed by private companies seeking economic benefit, human rights issues are rarely addressed in the design process.
Issues arising from the use of algorithms in decision-making processes are manifold and complex, and the debate about algorithms and their possible consequences for individuals, groups and societies is still at an early stage. To understand these issues better, we need to decode the social constructs around them. Following a string of terrorist attacks in the US and Europe, politicians called on online social media platforms to use their algorithms to identify potential terrorists and to take action accordingly. Such approaches may be highly prejudicial with respect to ethnic and racial background and therefore require scrupulous oversight and appropriate safeguards to protect individuals’ human rights. Similarly, automatic processing of personal data for individual profiling may lead to discrimination or to decisions that otherwise have the potential to affect the enjoyment of human rights, including economic, social and cultural rights.
The algorithmic predictions of user preferences deployed by social media platforms guide not only which advertisements individuals see; they also personalize search results and dictate how social media feeds, including newsfeeds, are arranged. Furthermore, content removal on social media platforms often takes place through semi-automated or fully automated processes. Algorithms are widely used for content filtering and content removal, including on social media platforms, directly affecting freedom of expression and raising rule-of-law concerns (questions of legality, legitimacy and proportionality). While large platforms such as Google and Facebook have frequently claimed that all content is removed by human beings, large parts of the process are automated or semi-automated. Governments around the world have advocated the use of automated detection and removal of extremist videos and images, and there have been proposals to modify search algorithms in order to “hide” websites that incite or support extremism. Facebook and YouTube have adopted automated filtering mechanisms for extremist videos. However, no information has been released about the process or about the criteria used to establish which videos are “extremist” or show “clearly illegal content”.
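Although the platforms’ actual criteria are undisclosed, automated removal systems of this kind are generally understood to compare uploads against a database of fingerprints of previously flagged material. The following is a minimal illustrative sketch of such hash-based matching, with all names and data hypothetical; it is not a description of any platform’s real implementation.

```python
import hashlib

# Hypothetical database of fingerprints of previously flagged files.
# Real systems use perceptual hashes that survive re-encoding; a plain
# cryptographic hash is used here only to keep the sketch simple.
flagged_hashes = {
    hashlib.sha256(b"previously flagged video bytes").hexdigest(),
}

def is_flagged(upload: bytes) -> bool:
    """Return True if the upload exactly matches known flagged content."""
    return hashlib.sha256(upload).hexdigest() in flagged_hashes

print(is_flagged(b"previously flagged video bytes"))  # True
print(is_flagged(b"new, unrelated video bytes"))      # False
```

Note that a match in such a system triggers removal without any human assessment of legality or context, which is precisely the rule-of-law concern described above.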
Law enforcement responsibilities have thus been delegated to private companies, creating the risk of excessive interference with the right to freedom of expression and raising concerns about compliance with the principles of legality, proportionality, and due process.