According to BCG’s latest survey, 39% of employees are worried about the development of artificial intelligence. After all: can AI be used safely and responsibly? Will jobs disappear? And what about bias, accountability, and regulation?
In this article, we’ll explore the impact of AI on your organisation and how to properly manage its opportunities and risks to protect the well-being of your employees.
The rise of AI in the workplace
What is AI? In a nutshell, artificial intelligence involves a machine performing complex tasks usually done by a human. These can include administrative tasks, analysing data, reasoning, and learning. But increasingly, generative AI also performs creative work – like writing an email with ChatGPT.
AI is called ‘intelligent’ because it is self-learning: the more data you feed it, the better the technology performs.
The use of AI technology has skyrocketed in recent years. According to research by Boston Consulting Group, as recently as 2018, 22% of companies worldwide were using AI. By early 2023, that figure had more than doubled to 50% of companies surveyed.
The benefits of AI for your organisation
The growth in the professional use of artificial intelligence is not surprising because of the many business benefits AI brings to the table.
AI’s most significant value lies in its ability to automate many processes. And that, in turn, leads to greater efficiency and significant financial gains. For instance, artificial intelligence could generate $15.7 trillion for the global economy between now and 2030, according to PwC.
Delving deeper into the economic benefits, we can discover three positive outcomes of AI:
- Innovation: AI enables many new ways of working (and living), especially when combined with algorithms.
- Higher productivity: By automating manual and routine tasks, machines can work faster, make fewer errors, and operate 24/7.
- Rapid data analysis: This can massively help with organisational decision-making.
Artificial intelligence has enabled many innovative business models that have become part of our daily lives, like the virtual assistant on your smartphone or a chatbot that partly digitises customer service.
AI technology can process an enormous amount of information simultaneously – much more than a human could ever independently analyse in a lifetime. This information processing helps IT companies or other departments working with large amounts of data to build models and predict trends based on that data.
Retail and consumer goods companies are also using AI efficiently to enhance customer experience with personalised product recommendations. Just think of the intelligent algorithms behind the ads you see on Google, LinkedIn, or Instagram, perfectly curated to your shopping preferences.
Artificial intelligence is also increasingly used in HR. It can make manual tasks more efficient – from payroll and screening CVs for recruitment to reviewing video job applications.
The risks you need to be aware of
AI’s return on investment is clear. Streamlining processes can save companies time, resources, and money.
But there is, undeniably, another side of the AI coin.
One of the risks of AI is that it is not always inclusive. In 2018, AI made the news for being biased when screening CVs for job applications at Amazon, disadvantaging female applicants. And even in 2020, facial recognition technology from Microsoft and IBM still showed bias: it recognised the faces of white men more accurately than those of other groups.
Other significant risks in using AI technology include:
1. Bias and inclusiveness
AI only gives output based on the data we feed it. Its output can only be as good as the input it receives.
Therefore, one of the dangers of AI technology is that it can have biases, prejudices, and even give discriminatory results.
If you feed AI biased or non-diverse data – for example, more data on one gender or skin colour – the output will be biased too. Best practice is to use diverse data and to regularly have a human check the results of the AI or the rules behind an algorithm.
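To make that check on diverse input data concrete, here is a minimal sketch in Python. The `check_group_balance` helper and the 30% threshold are illustrative assumptions, not a standard tool – the idea is simply to flag underrepresented groups in a training dataset before feeding it to an AI model:

```python
from collections import Counter

def check_group_balance(records, group_key, threshold=0.3):
    """Return each group's share of the dataset, plus any groups
    falling below the (hypothetical) minimum-share threshold."""
    counts = Counter(record[group_key] for record in records)
    total = sum(counts.values())
    shares = {group: n / total for group, n in counts.items()}
    underrepresented = {g: s for g, s in shares.items() if s < threshold}
    return shares, underrepresented

# Example: a CV dataset skewed towards one gender
cvs = [{"gender": "male"}] * 80 + [{"gender": "female"}] * 20
shares, flagged = check_group_balance(cvs, "gender")
print(shares)   # {'male': 0.8, 'female': 0.2}
print(flagged)  # {'female': 0.2} – below the 30% threshold
```

A check like this only catches obvious imbalances in the input; it is no substitute for the human review of results mentioned above.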
2. Privacy and data security
AI and algorithms operate on vast amounts of data, and not every AI technology handles that data with equal care. For instance, to use ChatGPT, you must create an account, and the chatbot stores all your queries.
Ensure you don’t simply use customers’ or employees’ personal data for AI; always ask permission. Also check the technology’s privacy policy – for example, whether data will be shared with third parties.
3. Transparency and accountability
Many AI-driven models make predictions or give answers without explaining the logic behind them. Have you ever submitted a question to ChatGPT? If so, you’ll have noticed the answer comes without any statement of sources.
The machine gives the conclusion without clarifying what sources, evidence, and thought process it followed. This is a crucial issue if you are using AI to make decisions – for example, about a candidate’s suitability in recruitment or for diagnosing in healthcare.
4. Loss of human interaction
One risk of automation is that it leads to less and less human interaction. And in an era of rapid technological development, our need for social contact is greater than ever.
Psychologist Britt explains: “Social health has a major impact on our quality of life. As humans, we need sufficient social contact for our well-being. As shown in study after study, good relationships are the biggest predictor of happiness.”
So if you want to use AI more often in your organisation, it is crucial to think carefully about how you want to maintain human contact.