The field of artificial intelligence (AI) in Pakistan is evolving rapidly and is poised to grow significantly over the coming decade. AI is creating new opportunities in education and promoting equality, freedom of expression and access to information. Advances in data collection and processing, increased computing power and low storage costs have given rise to numerous AI start-ups offering insights into the control of new diseases, identifying patterns of human interaction, setting up smart electrical grids and intelligent irrigation systems, powering smart cities and analysing shopping data, alongside less visible uses of AI such as identifying tax evaders and policing. AI-powered applications are set to play a critical role in Pakistan's cyber ecosystem, but they will also introduce significant risks and challenges by amplifying existing bias, discrimination and ethical shortcomings in governance. AI-driven algorithms and applications are growing at a rapid pace, yet most of this development is happening in the private sector, where little consideration is given to human rights principles when designing the code that translates data into conclusions, information or outputs. The potential impact on human rights is compounded by the absence of any framework in Pakistan that regulates the application of AI from a human rights perspective.
The aim of this first report is to examine the human rights legal framework for AI in Pakistan and to propose a theoretical framing for thinking about the obligations of the government and the responsibilities of private companies in the country, at the intersection of human rights and the expanding technological capabilities of AI systems.