GOOGLE GEMINI AND THE ISSUE WITH ITS AI

Google Gemini was launched towards the end of 2023 and was introduced as the company’s “largest and most capable AI model” to date. According to Demis Hassabis, CEO and co-founder of Google DeepMind, “Gemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research. It was built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image and video.” Gemini comes in three models, each designed to provide flexible AI-driven solutions depending on the device and the complexity of the task at hand.

Gemini Ultra is designed for highly complex tasks; Gemini Pro is attuned to performing across a wide range of tasks; and Gemini Nano is built for “on-device” tasks. Gemini Ultra is a first of its kind for several reasons, chief among them that it achieved a score of 90% on the massive multitask language understanding (MMLU) benchmark, outperforming human experts; MMLU draws on a combination of 57 subjects including world knowledge, maths, physics, law and more. By comparison, GPT-4 reportedly scored 86.4% on the same benchmark. The idea behind Gemini is that it is able to process, and therefore understand, multiple formats of information such as images, audio, video and text.
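To make that multimodal claim concrete, here is a minimal sketch of how a developer might query Gemini through Google’s google-generativeai Python SDK. The model identifiers (“gemini-pro”, “gemini-pro-vision”), the placeholder API key and the photo.jpg file are illustrative assumptions; the exact names and availability depend on Google’s current documentation.

    # A minimal sketch of querying Gemini via the google-generativeai SDK.
    # Model names and the API key below are placeholder assumptions.
    import google.generativeai as genai
    import PIL.Image

    genai.configure(api_key="YOUR_API_KEY")  # hypothetical placeholder

    # Text-only prompt against the mid-tier Pro model.
    text_model = genai.GenerativeModel("gemini-pro")
    print(text_model.generate_content("Summarise the causes of WWI.").text)

    # Multimodal prompt: the same generate_content call accepts a mixed
    # list of an image and text, illustrating the claim that Gemini can
    # combine different types of information in one request.
    vision_model = genai.GenerativeModel("gemini-pro-vision")
    image = PIL.Image.open("photo.jpg")  # hypothetical local image file
    print(vision_model.generate_content([image, "Describe this photo."]).text)

The design point is that one call interface handles both cases: the model accepts text and images together rather than routing each format through a separate system.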

While all of this sounds impressive, Gemini has not been without its issues, and many have been quick to criticise its very real shortcomings. Gemini is Google’s equivalent to ChatGPT and is able to answer questions with text responses; however, its biggest initial problems have arisen in its ability to generate images from prompts. Inaccuracies in the generated images have included people of historically incorrect genders and ethnicities in depictions of the US Founding Fathers and of German World War Two soldiers. Some users believe the AI has been designed to be ‘overly politically correct’ and trained on biased data. Similar concerns have surfaced in some of its text responses, including an answer to the question of whether Elon Musk posting memes on X could be equated in severity to Hitler’s genocide.

Google hasn’t necessarily acknowledged Gemini’s shortcomings publicly, but an internal memo notes that its AI has “offended our users and shown bias”. Although Google is confident that fixing the issues of bias, political correctness and inaccuracy will take only a matter of weeks, other experts believe it is not that simple, given how complex human history is and how much data an AI must be exposed to in order to formulate responses that aren’t deeply biased or problematic.