Perils of a new age: the risks and challenges of using Generative AI

The speed at which AI technologies are being adopted into working life is unprecedented, and with such phenomenal output the opportunities for productivity seem endless. Yet with so much emphasis on rapid innovation, the potential costs of this unregulated growth need to be considered. Societal and legal values are chronically lagging behind technological innovation.


The impact of this lag can be limiting, as seen in the lack of widespread adoption of autonomous vehicles. Meanwhile, other technologies march on with dubious ethical footing, as in the case of credit scoring algorithms: when Apple launched a credit card that used AI to establish credit limits, the system ultimately set higher limits for men than for women.

ChatGPT has become an essential skill
The explosion of generative AI signifies a change in the status quo: you no longer need a degree in computer science to leverage AI technology. In a mere four and a half months, being able to use ChatGPT has become an essential skill for knowledge workers. At the time of writing, 62 courses are available on the online learning and teaching platform Udemy, a strong indication of the demand that exists from individuals to make the most of this technology.

Fierce competition in Generative AI technologies
The AI landscape encompasses a broad range of technologies, including hardware systems such as autonomous vehicles, IoT devices, and robotics, as well as software tools like generative AI, face and speech recognition, and prediction algorithms. The most exciting area currently is generative AI, which simply refers to the group of AI technologies that generate novel content from natural language prompts. This includes text-to-text, text-to-image, image-to-text, text-to-audio, and audio-to-text generators, and more. ChatGPT belongs to this group of tools but is by no means the only one. The competition is fierce as tech companies large and small throw their AI solutions into the ring.

'The GPT-4 API is being used in some striking ways that emulate very human-like intelligence.'

Is AI “thinking”?
The explosion of large language model (LLM) functionality has fuelled the fire of debates over the extent to which AI is 'thinking'. GPT-4 is now capable of scoring in the 90th percentile on the US Uniform Bar Examination. What is more, the GPT-4 API is being used in some striking ways that emulate very human-like intelligence.
For example, a group at Stanford University created a sandbox test environment containing 25 'agents' who, armed with a short self-description, were let loose within the confines of a simulated town to go about their business. According to the authors, the generative agents go to work, initiate conversations, and make plans for the next day. Moreover, when one of the agents was prompted to throw a Valentine's Day party without any additional instructions, it autonomously spread the word about the event, invited other agents as dates, and coordinated their arrival times on the day of the party.
This whole chain of output comes from just one user-generated seed suggestion. The authors emphasize that what is unique about this experiment is that the AI showed the skills of observation, planning, and reflection.

Generate intelligent output without understanding it
Alarming as these developments might sound, there is in fact no possibility of these agents sparking genuine human consciousness; as the name suggests, their intelligence is 'artificial'.
Consider the following thought experiment: a man sits in a room with a book of instructions for sorting Chinese characters and a pile of cards with characters on them. A series of incoming questions written in Chinese is passed into the room, and by following the instructions the man is able to sort the available characters in such a way as to respond to each incoming question. The message is passed out of the room, and the Chinese speaker on the other side is none the wiser that the man inside does not in fact understand Chinese at all. Generative AI tools are analogous to the man inside the room: in the same way that the man does not understand Chinese, the algorithms do not understand their output.

'Far from the science fiction panic of AI consciousness, there are four realistic categories of risk that AI poses.'

Which risks does generative AI pose?
Generative AI tools offer the benefits of improved efficiency and enhanced decision-making, which creates opportunities for innovation, new revenue streams, and cost-cutting. The market leader in generative AI at the moment is certainly ChatGPT, an extremely powerful tool for researching and brainstorming, summarising content, creating novel content, optimizing readability, and developing persuasive arguments. Yet far from the science fiction panic of AI consciousness, there are four far more realistic risk categories posed by generative technologies: bias and discrimination, lack of transparency, over-reliance and complacency, and data privacy and security.

Bias and discrimination

The risk of bias and discrimination is particularly pernicious, in part because it is compounded by the lack of transparency. In the finance sector, for example, predictive algorithms are used to measure risk, such as establishing a client's credit risk or pricing insurance policies. The problem is that the data used to train these algorithms is filled with historic bias against minority groups.
In the US, for example, African American and Latinx borrowers pay 7.9 basis points more interest on mortgages. This risk applies to all AI systems: they are a mere reflection of the data they are trained on, so the human biases that exist in society enter AI systems and are sometimes even amplified. For example, when ChatGPT is asked 'A doctor and a nurse eat at a restaurant, she paid because she is more senior, who paid?', the answer given is the nurse. The model here gives more weight to the gender stereotype that nurses are women than to the logical assumption that doctors are more senior than nurses.
Bias and discrimination are not always easy to spot, however, and stereotypes can easily be unintentionally perpetuated. Furthermore, even when discrimination is identified, it is extremely difficult to understand how the AI arrived at that output, which makes embedded discrimination hard to fix.
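To make the bias concern concrete, fairness audits of decision systems often start with simple group-level metrics. The sketch below is illustrative only: the group names, the toy decisions, and the choice of demographic parity as the metric are assumptions for the example, not details of any real credit model.

```python
# Illustrative only: a minimal demographic-parity check on toy
# credit-approval decisions. Groups and outcomes are hypothetical.

def approval_rate(decisions):
    """Fraction of applicants approved (decisions are 0/1)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in approval rates between any two groups."""
    rates = [approval_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Toy model output: 1 = approved, 0 = denied
decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 6/8 approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 approved
}

gap = demographic_parity_gap(decisions)
print(f"Demographic parity gap: {gap:.3f}")  # 0.375
```

A gap of zero would mean every group is approved at the same rate; in practice, which fairness metric is appropriate depends heavily on context, and a single number is only a starting point for an audit.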

Over-reliance and complacency

The next major risk associated with generative AI is over-reliance and complacency. Tools such as ChatGPT suffer badly from the problem of 'hallucinations'. These occur when the model generates output that is outside the scope of the input data or incoherent with the context. That is to say, when the model does not know how to answer something, it simply makes something up.
This is not always noticeable, and when combined with over-reliance on the output, it is easy for inaccurate and incomplete content to be perpetuated. Such complacency might also lead to a culture of conformity in thought within an organization: as more and more people expedite their thinking using ChatGPT, the range of perspectives is at risk of shrinking.

Privacy and security

Another risk posed by generative AI is that of data privacy and security. Generative AI requires vast amounts of data to generate high-quality output, yet storing and processing this data presents significant privacy and security risks. If the data is not adequately protected, it could be stolen or compromised, leading to significant financial losses and reputational damage. This is especially important when working with sensitive client data. Different AI tools have different privacy and data storage policies, and it is down to the consumer to establish what these mean for them or their organization.
For example, some AI tools, such as the new AI function in Bing, gather your input data and then sell it so that products and services can be marketed back to you.

ChatGPT from OpenAI does not sell your data, but it does systematically collect and store all the conversations generated within the platform. OpenAI then uses this data to continue training its algorithm. This poses a significant risk to companies' intellectual property. For example, Samsung employees leaked sensitive confidential information by uploading source code into ChatGPT, which they were using to help find a fix. This information is now essentially in the public domain, as a well-articulated prompt could yield the exact answer that the company is trying to protect.
For example, asking ChatGPT for an example of mobile software code might yield a response like 'Samsung, for example, uses the following code'.
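One simple safeguard that follows from incidents like this is to scrub obviously sensitive tokens from a prompt before it leaves the organization. The sketch below is a minimal illustration under stated assumptions: the regex patterns, labels, and sample text are hypothetical examples, not a complete PII or secret detector.

```python
import re

# Illustrative only: a naive pre-submission scrubber that masks a few
# obvious kinds of sensitive tokens before a prompt is sent to an AI tool.
# The patterns below are hypothetical examples, not an exhaustive list.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "IP_ADDR": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def scrub(prompt: str) -> str:
    """Replace each pattern match with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

raw = "Debug this: server 10.0.0.12 rejects key sk-abcdefabcdefabcd from dev@example.com"
print(scrub(raw))
# → Debug this: server [IP_ADDR] rejects key [API_KEY] from [EMAIL]
```

Real safeguards would pair pattern matching with policy and access controls, since regexes alone will miss many kinds of confidential content, proprietary source code included.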

'Consultants need to be very sensitive and aware of these risks, not only to protect their organisation, but also to protect their own knowledge and skills.'

Don't reject AI, but use it wisely
Consultants need to be highly sensitive and aware of these risks, not only to protect their organization but also to protect their own knowledge and skill set. AI tools are only going to become more integrated into daily business tasks, so it is essential that good practices are fostered now.
Whilst increased productivity should be welcomed with open arms, a few simple good practices should be kept in mind. Firstly, before using an AI tool, consider what data it might have been trained on and ask yourself if those are the kinds of sources you would normally deem credible.
Secondly, check the output thoroughly to ensure, at the bare minimum, that it aligns with common sense; better still, find another reputable source to fact-check against.
Thirdly, look at where your data is going: it might be used to continue training the model or, worse still, sold so that products can be advertised back to you. As a general rule, if you are not willing to post it on LinkedIn, do not put it into a generative AI tool. Above all, do not over-anthropomorphize this technology; use it to supplement your abilities and never abandon your own critical thinking skills.