A guideline for the responsible use of generative AI in the workplace

Jamie Sharp

Writer & Blogger

AI has been widely used for many years in fraud prevention, autonomous vehicles, voice assistants like Siri and Alexa, the image recognition that lets you unlock your smartphone with your face, and a host of other applications, from traffic monitoring to weather prediction.

We’ve become used to AI working in the background, making life easier and less stressful, and to a large extent we barely noticed it happening.

And then generative AI took the world by storm, and AI is once again making headlines. Generative AI isn’t a new concept: research has been active since the 1960s, when Joseph Weizenbaum developed ELIZA, the first chatbot.

ELIZA was one of the first examples of Natural Language Processing (NLP). It simulated conversation with a human user by generating responses based on the text it received. Though it was a relatively primitive, rules-based system, it paved the way for further developments over the following decades.

Modern generative AI is built on deep learning, whose foundations in neural network research date back to the 1950s. That work took off in the 80s and 90s, and by the 2000s and 2010s computational capabilities had advanced to the point where deep learning became practical.

However, it wasn’t until the second half of 2022, with the release of OpenAI’s ChatGPT alongside diffusion-based image services such as Midjourney and Stable Diffusion, that the media and mainstream computer users sat up and took notice.

In the UK, more workers are excited about generative AI’s use in their workplace (49%) than worry about it replacing them in their job (38%). This is in line with interest from management: almost half (46%) of UK people leaders say that their company is considering ways to use generative AI.

However, Salesforce’s research points to a lack of digital confidence across the UK workforce and the need for businesses to better explain how accessible digital skills can be – especially those related to AI.

What is generative AI?

I asked OpenAI’s ChatGPT to define generative AI; here are its thoughts –

Generative AI refers to a class of artificial intelligence (AI) models and techniques that are designed to generate new content, such as images, text, music, or even entire virtual worlds, based on patterns and examples found in training data. Unlike traditional AI models that are primarily used for classification or prediction tasks, generative AI models focus on creative output and the ability to generate novel, realistic content that resembles human-created data.

Generative AI models are typically based on deep learning architectures, such as generative adversarial networks (GANs) or variational autoencoders (VAEs). These models learn to capture and understand the underlying patterns and structures in the training data, and then use that knowledge to generate new data that exhibits similar characteristics.

For example, a generative AI model trained on a dataset of landscape photographs could generate new, realistic-looking landscapes that were not part of the original dataset. Similarly, a text-based generative AI model could be trained on a large corpus of text to generate coherent and contextually relevant sentences or even entire stories.

Generative AI has found applications in various fields, including art, entertainment, design, and data synthesis. It enables the creation of new and unique content, assists in creative processes, and can serve as a tool for exploration and experimentation. However, it’s worth noting that generative AI models may also raise ethical concerns, such as the potential for generating misleading or deceptive content and require responsible use and oversight.

The above highlights the potential power of generative AI – perfect grammar, narrative flow and tone.
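For teams that want to reproduce this kind of query programmatically rather than through the chat interface, a minimal sketch using OpenAI’s Python client is shown below. It is purely illustrative: the model name is an assumption, and the exact client interface varies between versions of the openai package.

```python
# A minimal, illustrative sketch of asking a generative AI model a question
# programmatically. Assumes the openai Python package (v1.x) and an
# OPENAI_API_KEY environment variable; the model name is an example only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; use whichever model you have access to
    messages=[
        {"role": "user", "content": "Define generative AI in plain language."}
    ],
)

print(response.choices[0].message.content)
```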

McKinsey defines generative AI more succinctly in its article ‘What is generative AI?’ as algorithms that can be used to create new content, including audio, code, images, text, simulations and video.

A 2022 McKinsey survey showed that AI adoption had more than doubled over the previous five years, and investment is increasing at a similar level.

Generative AI clearly has the power to change how a range of jobs are carried out, though its full impact is not yet known, and the risks involved are also still unclear.

What output can you get from generative AI, and what problems could it solve for businesses?

Content created by generative AI can be indistinguishable from human-generated content, though the quality of the results depends on the model used and on the prompts supplied.

The range of possible outputs is vast, and the speed of generative AI is remarkable. According to McKinsey, for example, generative AI produced an essay comparing the theories of nationalism of Benedict Anderson and Ernest Gellner in ten seconds, and produced a widely shared parody, in the style of the King James Bible, describing how to remove a peanut butter sandwich from a video recorder.

The downside to this technology is that it doesn’t always get it right. Generative AI can struggle with basic algebra, counting, and overcoming sexist and racist biases that pervade the internet.

For businesses, the opportunities are extensive. Generative AI tools can produce a variety of credible writing in seconds and then refine it in response to feedback until it meets a business’s requirements.

Organisations of all kinds could benefit, from IT and software companies that gain from near-instant, largely accurate code to businesses that need plausible marketing copy.

What are the problems to overcome using generative AI in your business?

The chances are your employees are already using generative AI at work. A recent survey by the workplace app Fishbowl found that 40% of professionals have used ChatGPT or another generative AI tool at work, and 68% of those users admit they’re doing so without their boss’s knowledge or permission.

Unsurprisingly, your employees want to try out this new technology, and it could be advantageous to your business to encourage them to experiment and work with generative AI.

An MIT study of 444 white-collar workers showed that productivity increased when ChatGPT was used for writing and editing tasks in marketing, grant writing and data analysis. ChatGPT users were 37% faster, and the quality of their work improved more rapidly with repetition.

Additionally, a new study from Stanford University in conjunction with MIT found that using a generative AI tool increased productivity for experienced workers by 14% and by up to 35% for novice or low-skilled workers. The study also found it improved customer sentiment and reduced the need for managerial intervention.

However, the use of generative AI does present some challenges.

Privacy

If your employees are using personal data, such as candidate information, customer data or employee records, or your business’s intellectual property, to generate content, it could raise issues around data protection and compliance with regulations such as GDPR.

Similarly, recruiters entering CVs or LinkedIn profiles into a generative AI tool to generate personalised messaging or job matches could cause the same problems.

Your employees must be aware of the data sources and permissions involved in generative AI systems to ensure they don’t inadvertently violate privacy policies or laws.
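One practical safeguard, purely as an illustration, is to strip obvious personal data from text before it is pasted into a generative AI tool. The sketch below uses simple pattern matching; the patterns and placeholder labels are assumptions, and a regex filter is no substitute for a proper data-protection review.

```python
# A minimal sketch of redacting obvious personal data before text is sent to a
# generative AI tool. The patterns are illustrative only; real compliance with
# GDPR and similar regulations needs a proper review, not a regex filter.
import re

REDACTIONS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk_phone": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
}

def redact(text: str) -> str:
    """Replace matched personal data with a placeholder before sharing."""
    for label, pattern in REDACTIONS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

cv_snippet = "Contact Jane Doe on 07700 900123 or jane.doe@example.com."
print(redact(cv_snippet))
# Contact Jane Doe on [REDACTED UK_PHONE] or [REDACTED EMAIL].
# Note: names such as "Jane Doe" are not caught here; real redaction
# needs more than simple pattern matching.
```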

Security

Your employees may inadvertently share sensitive data with the wrong people, or expose it through generative AI tools that are vulnerable to hacking or manipulation by cyber criminals or state-sponsored agents.

They need to be careful about the reliability and security of any generative AI they use, and in particular about the risk of exposing your organisation to data breaches through malicious ChatGPT Chrome extensions.

Inaccuracy and Ethics

Generative AI systems can generate factually inaccurate or inconsistent content, a phenomenon known as hallucination, in which they make up information that is not supported by their training data or the real world.

When your employees use generative AI, they need to verify and validate the generated content and correct errors and inconsistencies.

Additionally, generative AI systems can produce content that is biased, offensive or harmful to specific groups or individuals by reproducing existing stereotypes and prejudices.

Your employees need to consider the ethical implications and social impact of the content they produce using generative AI, and must not use these tools for malicious or fraudulent purposes.
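On the verification point above, one lightweight safeguard is to flag any figure in generated text that cannot be traced back to the source material, so a human reviews it before publication. The sketch below is a purely illustrative heuristic; the function name and matching rule are assumptions, not an established method.

```python
# A minimal sketch of a review gate for AI-generated text: any figure that does
# not appear in the source material is flagged for human checking. This is an
# illustrative heuristic, not a substitute for proper fact-checking.
import re

def unverified_figures(generated: str, source: str) -> list[str]:
    """Return numbers in the generated text that are absent from the source."""
    numbers = re.findall(r"\d+(?:\.\d+)?%?", generated)
    return [n for n in numbers if n not in source]

source = "Q2 revenue was 4.2 million GBP, up 12% on the prior quarter."
draft = "Revenue reached 4.2 million GBP in Q2, a rise of 21% quarter on quarter."

issues = unverified_figures(draft, source)
if issues:
    print("Needs human review, unverified figures:", issues)
else:
    print("All figures traced back to the source.")
```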

Provide guidance to your teams to educate and empower

The potential for serious harm through the use of generative AI should not be underestimated, and it is essential that companies provide comprehensive guidance on how to use this technology responsibly.

Educate

Your employees need to have a basic understanding and knowledge of what generative AI is, how it works and how to get the best out of it by using prompts that will get them the results they are looking for. They also need to be aware of the potential risks and benefits.

You need to provide them with clear, unambiguous guidelines on the data they can use, who can access it, and how it needs to be protected.

Generative AI is a fast-moving technology, so your guidelines and education programs will need to be updated regularly to ensure they remain relevant and accurate.
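Part of that education can be as simple as giving staff a repeatable structure for their prompts. The sketch below shows one possible reusable template; the field names and wording are illustrative assumptions rather than any standard.

```python
# A minimal sketch of a reusable prompt template that staff can fill in, so that
# requests to a generative AI tool carry the role, context, task and constraints
# your guidelines ask for. The field names are illustrative, not a standard.
PROMPT_TEMPLATE = """You are acting as {role}.
Context: {context}
Task: {task}
Constraints: {constraints}
Do not include any personal or confidential data in your answer."""

def build_prompt(role: str, context: str, task: str, constraints: str) -> str:
    return PROMPT_TEMPLATE.format(
        role=role, context=context, task=task, constraints=constraints
    )

print(build_prompt(
    role="a recruitment copywriter",
    context="We are hiring a junior data analyst in Farnborough.",
    task="Draft a 100-word job advert.",
    constraints="Plain English, no salary figures, UK spelling.",
))
```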

Empower and Evaluate

Let your employees explore and experiment with generative AI tools that are useful to their work and could increase their productivity. Then support them with education, guidance and feedback on how to use those tools safely and effectively.

Finally, monitor and evaluate the effect of generative AI on performance, business outcomes and customer satisfaction, using metrics such as quality, accuracy and engagement to measure the value it provides to your business.
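What that monitoring looks like will vary by business, but as a purely illustrative sketch, usage could be logged against a few simple measures and aggregated over time. The record fields and scoring below are assumptions; choose metrics that match your own outcomes.

```python
# A minimal sketch of tracking how generative AI performs against simple
# metrics such as quality, accuracy and time saved. The fields and scoring
# are illustrative; substitute measures that match your business outcomes.
from dataclasses import dataclass
from statistics import mean

@dataclass
class AIUsageRecord:
    task: str
    quality_score: int      # e.g. reviewer rating, 1-5
    factual_errors: int     # corrections needed before publication
    minutes_saved: float    # compared with doing the task manually

records = [
    AIUsageRecord("job advert draft", quality_score=4, factual_errors=0, minutes_saved=25),
    AIUsageRecord("candidate email", quality_score=5, factual_errors=1, minutes_saved=10),
]

print("Average quality:", mean(r.quality_score for r in records))
print("Total errors caught:", sum(r.factual_errors for r in records))
print("Minutes saved:", sum(r.minutes_saved for r in records))
```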

Enforce

To ensure that generative AI is being used safely and effectively in your business, you’ll need to establish and enforce clear policies for its responsible use, supported by codes of conduct, audits, best practices and checklists.

You are responsible for ensuring your employees comply with all legal and ethical requirements.

Conclusion

Generative AI gives organisations the opportunity to increase efficiency and productivity, but realising that opportunity requires educating employees on how to use it responsibly and legally.

Workers clearly recognise that the path forward requires reskilling and want to be part of it. An emphatic 96% believe businesses should prioritise AI skills as part of the strategy to develop their workforce.

When used correctly, generative AI opens up new opportunities for employees and businesses alike to thrive and quickly add value for their customers, but comprehensive guidance and education are essential to avoid falling foul of privacy and security rules.

Find out how our award-winning, on-demand recruitment solutions can reshape the way you meet your hiring needs.
