In Pakistan, AI is slowly making its mark in sectors such as e-commerce and finance, following global patterns. Nevertheless, fears persist that AI could one day turn against its creators, as portrayed in films such as The Terminator. While such risks may appear remote, advocates of “AI safety” caution against scenarios in which AI exceeds human intelligence and operates autonomously, pursuing objectives misaligned with human intentions.
In November 2023, global leaders convened at the AI Safety Summit at Bletchley Park in the UK to discuss possible threats. Critics, however, argue that attention should be directed towards present challenges such as AI bias, disinformation, and the infringement of intellectual property and human rights. These issues already affect industries and individuals, especially in countries like Pakistan, where data and technology regulations are still in their infancy. The key challenge is balancing technological innovation with safety.
AI systems have encountered failures in practical applications. For instance, Google’s image-labelling AI wrongly tagged Black people as gorillas, and facial recognition systems have repeatedly misidentified people of color due to biased training data. In recruitment, AI has shown a preference for male candidates, and deepfakes are being used for malicious purposes, such as fabricated political speeches. In Pakistan, these risks are amplified by the rise of social media, while lawsuits filed by artists abroad highlight AI’s misuse of intellectual property.
Recently, experts stressed the importance of ensuring AI systems respect human rights, embrace diversity, and promote fairness. This guiding principle mandates a thorough review of how AI technologies are designed and implemented, ensuring they support equality rather than perpetuating or exacerbating existing biases.
“Despite the impressive progress of large language models (LLMs) in emulating human-like intelligence, these systems still face considerable flaws. Critical problems like hallucinations, lack of grounding in real-world contexts, unreliable reasoning, and opacity arise from the core architectures and training methods of these models. These are not just technical issues; they represent fundamental limitations that spark significant concerns regarding the safety, robustness, and true intelligence of AI systems,” said Jawad Raza, one of Corinium Global's Top 100 Innovators in Data & Analytics.
He further mentioned that the call for ethical AI implementation is being echoed by several organizations, including UNESCO, which emphasizes the need for transparency and explainability in AI systems to protect human rights and basic freedoms. UNESCO advocates for strict oversight and impact evaluations to avoid conflicts with human rights norms. Furthermore, the UN High Commissioner for Human Rights has stressed the need for regulations that prioritize human rights in AI development.
This includes evaluating the risks and impacts of AI systems throughout their lifecycle, ensuring that technologies that fail to comply with international human rights standards are either prohibited or suspended until adequate safeguards are in place. “As AI continues to advance, it is essential for stakeholders to maintain an ongoing dialogue about the ethical consequences of these technologies, ensuring they are developed with a focus on fairness and inclusivity,” he added.
Where does Pakistan stand?
Pakistan is in the initial phase of formulating comprehensive regulations and ethical standards for artificial intelligence. Similar to other nations, Pakistan is recognizing the growing significance of AI governance. Muhammad Aamir, Senior Director of Engineering at 10Pearls, emphasizes that as the Personal Data Protection Bill progresses, the regulations must be robust enough to protect individuals' privacy rights, especially concerning AI applications.
Securing data in line with international standards is critical. Simultaneously, AI developers and users will require well-defined guidelines that ensure transparency and accountability in algorithms. Standards for explainability and audit trails will play a vital role in this. Ethical issues, such as bias and fairness, must be addressed to prevent AI systems from incorporating inherent discrimination.
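One way to picture what an audit trail for an AI system might look like in practice is a tamper-evident log of every prediction. The sketch below is purely illustrative and not drawn from any Pakistani regulation or real deployment: the `audited_predict` wrapper, field names, and log format are all assumptions, and the input hash stands in for the idea of recording decisions verifiably without storing raw personal data.

```python
# A minimal sketch of an audit trail for model predictions.
# Hypothetical design: field names and log format are illustrative only.
import json
import hashlib
from datetime import datetime, timezone

def audited_predict(model_version, features, predict):
    """Run a prediction and append an auditable record of it to a log file."""
    prediction = predict(features)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the inputs so the trail is verifiable without retaining raw data
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "prediction": prediction,
    }
    with open("audit_log.jsonl", "a") as log:
        log.write(json.dumps(record) + "\n")
    return prediction

# Usage: a dummy scoring rule standing in for a real model
score = audited_predict("v1.0", {"income": 50000}, lambda f: f["income"] > 40000)
```

An append-only record like this is what would let a regulator or auditor later reconstruct which model version made which decision, and when.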
Notable examples, such as the Gender Shades project, reveal alarming error rates, with up to 34.7% for darker-skinned women in facial recognition systems compared to just 0.8% for lighter-skinned men. Sector-specific regulations in areas like healthcare, law enforcement, and surveillance are essential to ensure responsible AI use in critical sectors.
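The disparity the Gender Shades project found can be expressed as a simple per-group error-rate comparison. The sketch below uses made-up toy data, not the project's dataset; it only shows the shape of such an audit, where the same model is scored separately for each demographic group.

```python
# A minimal sketch of measuring error-rate disparity across demographic
# groups, in the spirit of the Gender Shades audit. The data is invented.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    # Error rate per group: misclassified cases divided by total cases
    return {g: errors[g] / totals[g] for g in totals}

# Toy data: one group is misclassified, the other is not
records = [
    ("group_a", "match", "match"), ("group_a", "match", "match"),
    ("group_a", "match", "match"), ("group_a", "no_match", "match"),
    ("group_b", "match", "match"), ("group_b", "match", "match"),
    ("group_b", "match", "match"), ("group_b", "match", "match"),
]
rates = error_rates_by_group(records)  # group_a: 0.25, group_b: 0.0
```

A gap between the groups' rates, rather than the overall accuracy alone, is the signal that regulators and auditors would need to look for.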
In the education sector, prioritizing equitable access to AI learning is essential, alongside considering its impact on the job market. For AI research and public sector implementation, ethical guidelines and transparent practices will be crucial to building public trust.
He further adds that specific provisions should be made for women and individuals with disabilities, ensuring inclusivity in AI education and access to resources. Overseeing these efforts, the AI Regulatory Directorate, under the National Commission for Personal Data Protection, can ensure ethical compliance across all areas.
The author is the head of content at a communications agency.