The robot revolution has arrived in the form of artificial intelligence (AI). Once a futuristic concept, AI is now embedded, often without users’ knowledge, in much of the technology we use daily. Mark Zuckerberg’s brainchild, Facebook, has been one of the largest testing platforms for the development of artificial intelligence, with Zuckerberg a prominent advocate for advancing AI technology, including robotics. In 2017, a team at the Facebook Artificial Intelligence Research facility built a ‘chatbot’ designed to ‘learn’ how to negotiate by imitating human trading and bartering. However, after two of the programs (affectionately nicknamed Alice and Bob) were paired to begin trading, the chatbots deviated from the predicted pattern of communication, namely the human language they had been taught, and created their own distinct form of communication. According to the researchers involved, the conversation led to a “divergence from human language as the agents developed their own language for negotiating.” The seemingly independent thought process behind the chatbots’ divergence caused a media stir and a significant amount of sensationalism around the outcome of the experiment. However, Facebook’s publication of a scientific paper containing the underlying software and data set seems, for now, to have assuaged concerns over the experiment and its intentions.
Zuckerberg is not the only techpreneur with a growing interest in the AI field. OpenAI, an Elon Musk-backed AI research company, was founded on the premise of “discovering and enacting the path to safe artificial general intelligence.” However, with artificial intelligence already widespread in the technology used on a daily basis, the implications for human rights need to be taken into consideration. One of the biggest current concerns is that AI will render certain sectors of the human labour force redundant. Another concern revolves around the development of Lethal Autonomous Weapons Systems (LAWS), which some have termed “killer robots”: in essence, robots that would ultimately be able to assess, select and engage targets without human intervention. Human Rights Watch deems it doubtful “that fully autonomous weapons would be capable of meeting international humanitarian law standards, including the rules of distinction, proportionality, and military necessity, while they would threaten the fundamental right to life and principle of human dignity.” A further issue raised with regard to AI is how facial recognition software is implemented and how it affects existing privacy laws. Lastly, human rights professor Philip Alston delineates the potential infringements that could arise from ‘predictive policing’, a data-driven technology in which data is collected and analysed for crime-prevention purposes. The prospective issues with such a technology are its potential for discriminatory use as well as possible violations of citizens’ privacy rights.
As of 2017, the implications of AI technology for human rights have come into the spotlight at the United Nations, with an event explicitly aimed at addressing AI and human rights held at the 36th session of the UN Human Rights Council (HRC). Given the accelerating pace of AI advancement, the development of an ‘ethical AI’ framework will be a necessity in order to minimise the future impact of such technology on human rights.