Risks of employees using AI in the workplace
This page is incomplete!
This page is part of my step-by-step guide to Workplace AI, which I'm in the process of writing. I'm doing it in the open, which allows people to see it and provide feedback early. However, many of the pages are just initial brain dumps, bullets, random notes, and/or incomplete.
There's an overview of what I'm trying to accomplish on the "What is this site about?" page.
Want to stay updated when pages get major updates or things are added? I have a news feed specifically for guide content here: RSS feed for guide updates.
—Brian (July 2024)
I guess it says something that I put the “risks” section ahead of the “benefits” section. This stems from the fact that the risks apply to all companies. As I outline in the previous section, even if you think AI is all hype and you don’t want to deal with it, you will still have individual employees who will find and use consumer-facing tools like ChatGPT on their own, whether you want them to or not. So every company must deal with the risks.
(There are many actual, non-BS benefits to properly supporting employee use of AI, as I'll outline in the next section. But of course those benefits only apply to companies that provide that support, while the risks apply to all companies.)
The biggest risk? Employee & company culture.
My personal view is that the biggest risks of unchecked employee AI usage are in the areas of company culture and employer-employee relationships. Sure, there are security, compliance, and all sorts of other things to deal with, but those are more like "traditional" IT risks which IT knows how to handle.
The actual AI risks are pretty different, and they involve more than IT. I’ll use an example to illustrate.
Even today (or in the coming months), AI tools like Microsoft Copilot, Apple Intelligence, Google, etc. offer AI assistance that is personalized, ubiquitous, and "helps" people so seamlessly and smoothly that they hardly even notice it. Over time, more and more writing output and work product will be generated by AI rather than by the human employee. All of these tools have access to employee apps, screens, and data. These companies' stated goal is to get their AI to know the user and their work style, so that today's chat-based "type, wait, response" loop will be a thing of the past.
Even for people who think, "I'm not going to succumb to AI for everything," it will be extremely difficult to resist (or even to know you're resisting). When things are so smoothly integrated, you'll need to exert hundreds of microbursts of willpower every day.
Let’s use Microsoft Office Copilot as an example. In Microsoft Word, in addition to the brightly-colored icon in the ribbon, a little in-line Copilot icon literally appears anytime you start a new line. It’s right there in your face:
The screenshots above show what the experience looks like when I write this section in Word. Personally I don't want to use Copilot for content I write, but I also don't want to disable it, because I might want to use it for something else in the future, and, frankly, it's interesting to click and see how good it is.
(How good is it? While the actual prose it generated sounds nothing like my writing style and is mostly generic platitudes, it also did a pretty decent job of summing up what I'd written up to that point.)
In my case I would never want to risk my core product (my writings and perspectives) by attaching mediocre content to my name. But what if this were a lower-stakes document, something that I didn't really want to do and "good enough" was good enough? Sure, maybe I'll click the generate button "just to see how good it is" at first. Eventually I'll start seeing points that I like, and as I get more comfortable with AI and the models get better, I'll start the slow-then-fast slide into more and more of my work output being generated by AI.
Now scale this to most employees in your company, and the negative consequences are staggering:
- Employees start to rely too heavily on generative AI, leading to a lack of original thought and creativity.
- Erosion of critical-thinking skills from consistently accepting AI-generated content.
- Quality of content diminishes because AI is “so good” that employees stop closely reviewing it.
- All work product becomes more generic. The best employees become mediocre.
- Employees “phone it in” for most tasks. (Hey that’s funnier now with GenAI on an iPhone.)
- This creates a self-fulfilling doom loop. (More employees using GenAI for more tasks leads to more employees using GenAI for more tasks.)
- Massive increase in productivity theater. More text is created because it’s easier to create, meaning people need to rely on more AI to summarize the text.
- Eventually all original human thought is passed through an AI expander / compressor loop.
- Why are we even doing any of this?
The ultimate kicker for most of the above issues is this: how would the company even know? I'm not talking about the (now quaint) "consumerization of IT" angle where employees are using AI without the company knowing; rather, I'm talking about the bigger existential issue of how the company will even know that this slow decline in employee and work quality is happening. This evolution will be slow (at first) and extremely difficult to quantify, track, and manage. (Sidebar: It's already starting to happen.)
There’s a lot to unpack even in those few quick bullets I just cranked out off the top of my head, not to mention all the other surrounding issues. (Helping our customers understand and address this is what I’m focusing most of my time on these days. It’s super interesting and complicated!)
Of course this is much bigger than any one tool like Copilot or Apple Intelligence. More AI tool integration, more personalization, and increased speed all lower the barriers to use. Multiply this by all the employees in a company and you have a slippery slope, so we need to get serious about how we're going to address this ASAP.
Over-reliance and complacency
- Employees rely too heavily on GenAI for ideas, leading to a lack of original thought and creativity
- Erosion of critical thinking: Consistently accepting AI-generated content can weaken employees’ critical thinking and problem-solving skills
- Generated content seems really good! Will employees actually carefully review everything? Over time it’s easy to become complacent and accept everything from AI without paying attention
Quality & Accuracy
- Hallucinations
- Lack of fact-checking
Security & Confidentiality
- Does the vendor train on employee queries and conversations?
- Even if the big vendors don’t, if you block those and employees use other free ones, are they trustworthy?
Intellectual property concerns
- Who owns the content the AI produces?
- Some models are known to come close to plagiarizing. Do employees end up pasting or using text that is effectively copied from somewhere else on the internet or from the training data?
Employee culture impact
- Does the “generic” output of GenAI mean that employee content is now more generic?
- Does the company lose what’s special about its culture, values, etc.?
- When employees see how “good” AI is, do they lose hope? Do they mentally give up, and just “phone it in”, since they think AI can do their jobs for them?
- Does this create a self-fulfilling “doom loop”?
- Do employees not develop new skills, since they just blindly use GenAI?
Employee / employer relationship
- What is work? What are the employees getting paid for?
- If they save time with AI, is that free time for them? Are they expected to take on more work instead? What’s the real goal and benefit? What’s being measured?
More work for everyone!
- GenAI lets employees generate content easily, leading to "productivity theater" where everyone writes longer emails, more emails, more reports, more content (we all show off how thorough we are).
- This forces employees to use even more GenAI to auto-summarize everything (email overload, meeting transcripts, more and longer documents)
- We take “real” human-created content, expand it out with GenAI, then use GenAI to reduce it down. Now we have essentially passed all company content through an AI expander / compressor loop. Does the good stuff even make it through? Do all work products become the generic GenAI versions?
- Bonus: we are paying AI vendors, putting the company at risk, and warming the planet for this! Yay?
Bias & Ethics
- Large LLM providers are secretive about their training sources
- They are also secretive about their fine-tuning and system prompts
- Does the LLM provider’s bias now get baked into employee generated content?
Company Oversight
- For all these issues, how would you even know?
- This will be a slow degradation over time which is extremely difficult to understand or manage
- AI allows companies to automate their worst traits (employee monitoring, looking for reasons to fire people, etc.)
Fear not!
To be clear, these risks can be mitigated, which is really the whole point of this strategy guide. So don't worry; this page is just more about what could happen if you ignore the problem.