How AI is changing the employee / employer relationship
This page is part of The Workplace AI Strategy Guide
This page is part of a step-by-step guide to Workplace AI strategy, which I'm currently in the process of writing. I'm creating it like an interactive online book. The full table of contents is on the left (or use the menu if you're on a mobile device).
What's this guide all about? Check out the intro or full table of contents.
Want to stay updated when this guide is updated or things are added? There's an RSS feed specifically for strategy guide content: RSS feed for guide updates.
This page is incomplete!
I'm writing this guide in the open, which lets people see it and provide feedback early. As a result, many of the pages are still initial brain dumps, bullets, random notes, and/or incomplete.
—Brian (July 2024)
- What do you measure?
- What is productivity?
- What’s it mean to work?
- Using AI to enhance existing processes versus creating new approaches
- Impact of AI on employee career development and growth
- AI in performance management & appraisals
- AI for employee engagement & retention
What do you measure?
In June 2024, a story made the rounds about Wells Fargo firing more than a dozen employees whom the company accused of pretending to work by faking computer keyboard activity. (One commenter on The Register’s site wrote that in response to requests for further information, Wells Fargo replied with “jkljkljkljkljkljkljkljkljkljkljkljkljkl.”)
Wells Fargo said the employees were “discharged after review of allegations involving simulation of keyboard activity creating the impression of active work,” and that “Wells Fargo holds employees to the highest standards and does not tolerate unethical behavior.”
Whether or not the firings were justified isn’t really my interest here. Instead, I want to use this story to highlight a complexity that’s rapidly emerging in the workplace:
What are employees actually paid for in the era of AI and automation, and how do those compensation structures go haywire?
Based only on the high-level information available, there’s a logical inconsistency around why these employees were dismissed. We can’t dig deeper since we don’t have all the details, but it’s still a useful thought exercise, and it provides the perfect backdrop for discussing the complicated issues that are starting to arise in workplaces all over the world.
So, let’s step through some hypothetical scenarios about what might have happened and why these employees were fired.
Scenario 1: Employees were not completing their work
In this scenario, I imagine that instead of doing their actual work, these employees used a keyboard simulator to make it seem like they were working. But since the keyboard simulator was not a real human, the employees didn’t get their work done.
It’s possible that some type of activity monitoring system was fooled, causing the employees’ “attendance” or “time spent at the computer” to look good. But because they weren’t finishing their work, they still got fired.
In this scenario, the cause of dismissal was non-performance, not keyboard simulation. (In fact, if this were the actual scenario, why would Wells Fargo even mention the keyboard simulation?) So my guess is this was not the real scenario.
Scenario 2: Employees were successfully completing their work
In this scenario, I imagine that the employees were able to get their work done, so they were not directly fired for performance-related issues. This raises the question, “If the employees were able to get their work done, then why were they even using the keyboard simulator?”
I could think of a couple of reasons for using the keyboard simulator:
- Scenario 2A: The keyboard simulation was used to make it look like they were working slower/longer than they were.
- Scenario 2B: The keyboard simulation was used to successfully complete their work.
- Scenario 2C: The keyboard simulation didn’t impact their work, but was a security policy violation.
Scenario 2A: The keyboard simulation was used for sandbagging
In this scenario, the employees were successfully getting their work done, but some other workplace pressure led them to simulate additional keyboard activity. I can imagine a few plausible reasons for this. Maybe there was a culture of overworking, and even though these employees were able to complete the work tasks asked of them, they felt they needed to appear to be working longer than they were. Maybe the keyboard simulators prevented their devices from sleeping and kept their status in Teams as active even though they’d left for the day.
Or, instead of faking the keyboard activity to work “extra” hours, maybe they used it to fake daytime working hours? Maybe these employees were able to do their actual jobs in half the time of others, and rather than taking on more work, or working slower so as not to be assigned more work, they just worked at their normal speed and then ran the keyboard simulator to make it appear they were still working when in reality they were done.
If either of these cases were true, then the dismissals weren’t really about the keyboard simulation. If Wells Fargo didn’t like that the employees misrepresented their working hours, then the reason they were fired was for misrepresenting working hours, not for faking keyboard activity.
Scenario 2B: The keyboard simulation was used to successfully complete their work
We also have to consider the scenario that the keyboard simulation was more than just a random clicker to keep their screens awake. What if, instead, these employees actually automated enough to get real work done? For example, maybe they had very simple and repetitive tasks which they were able to automate via something like AutoIt, Open Interpreter, or any one of the dozens of AI-powered RPA (Robotic Process Automation) tools which are now marketed directly towards employees.
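To give a sense of how low the bar for this kind of self-automation has become, here's a minimal sketch of the sort of simple, repetitive task an employee might quietly script in a few lines of Python. Everything in it (the customer data, the letter template, the `generate_letters` helper) is hypothetical; we have no idea what, if anything, the Wells Fargo employees actually automated.

```python
# Hypothetical illustration: turning a spreadsheet export into form
# letters, the kind of repetitive task that's trivially scriptable.
import csv
import io

# Stand-in for a spreadsheet export that would otherwise be
# processed by hand, one row at a time.
SPREADSHEET = """customer,amount,due_date
Acme Corp,1200.00,2024-08-01
Globex,450.50,2024-08-15
"""

TEMPLATE = (
    "Dear {customer},\n"
    "Our records show a balance of ${amount}, due {due_date}.\n"
)

def generate_letters(csv_text: str) -> list[str]:
    """Fill the letter template once per spreadsheet row."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [TEMPLATE.format(**row) for row in reader]

for letter in generate_letters(SPREADSHEET):
    print(letter)
```

The point isn't this particular script; it's that a task which once justified hours of "keyboard activity" now takes seconds, which is exactly what breaks activity-based monitoring.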
I could see a scenario where they were technically fired for lying to their employer about how much time they spent working, but wow, if these employees were able to automate a large chunk of their own job, I would hope the company would figure out a way to offer them the opportunity to help automate more tasks for other employees. So I wouldn’t think this was the real reason.
Scenario 2C: Security policy violation
Maybe the issue was as simple as Wells Fargo not liking that the employees installed unauthorized software, and they were fired for that. This also seems implausible: if the company were that serious about people not installing unauthorized apps, there are technical ways to prevent it. And besides, if this is what the employees were fired for, the reason would be “security policy violation,” not “faking keyboard activity.”
Broader implications for the future of work
While it’s fun to speculate about the details of what really happened at Wells Fargo, you can see that you don’t need to know the details to raise a bunch of really interesting questions that companies are going to need to address pretty soon. Each of the plausible scenario & response pairs outlined above is some version of this:
What employees thought they were being paid for, and what companies thought they were paying employees for, were not aligned.
This is something that has always been fuzzy in knowledge worker / office jobs. Most of us think we’re paid for “40 hours of work per week” or some similar thing. But what is “work?” Sitting at your desk for 40 hours a week? “Doing emails” for 40 hours a week? Anyone can find mindless busywork to do, but most jobs measure some type of quantitative work output produced by an employee.
The problem is that the output being measured is becoming much easier to create with AI.
Imagine that my job is to write three marketing briefs per week, which might take 24 hours total. (The other 16 hours a week are general office overhead—team meetings, forced fun, etc.) But if I can use ChatGPT to cut the time to write a marketing brief from 8 hours to 4 hours, now I have an “extra” 12 hours per week.
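Spelling out that back-of-the-envelope arithmetic (the numbers are just the ones from the example above):

```python
# Back-of-the-envelope math for the marketing-brief example.
briefs_per_week = 3
hours_per_brief_before = 8   # writing a brief by hand
hours_per_brief_after = 4    # writing a brief with ChatGPT's help

hours_before = briefs_per_week * hours_per_brief_before  # 24 hours
hours_after = briefs_per_week * hours_per_brief_after    # 12 hours
freed_hours = hours_before - hours_after

print(f"Freed up {freed_hours} hours per week")  # Freed up 12 hours per week
```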
So what should I do with that?
What would you do with that?
I mean, before ChatGPT (e.g. last week), I was reliably paid my salary to crank out three marketing briefs per week. So if I can use ChatGPT’s help to now generate 6 per week, shouldn’t I get something like double the pay?
But what if my company doesn’t see it that way, and they want to pay me the same amount now that I’m creating 6 briefs per week instead of 3? Do I just go back to doing 3, and find some other way to fill those extra 12 hours per week? What if the company tracks how much time I spend on my computer? Should I work artificially slower? Should I download a keyboard simulation utility that keeps my Teams online status green while I’m playing golf?
And now imagine that every employee in a company is struggling with their own version of this. Suddenly, how a company responds is a really big deal. Ignore it and you’ll see your quality slowly degrade. Punish it and your best employees will leave. The only viable option is to embrace it.
Sidebar: How a company embraces employee automation is largely driven by the company’s current business constraints. e.g. if every employee doubling their output means the company can double its revenue, then the company will embrace it. But if the company cannot grow or otherwise benefit from doubling employee output (e.g. maybe a fixed total market or other externalities), then the result of doubling employee output will be cutting 50% of the employees. In those cases, the employees might realize this first and revolt or not embrace the technology. Which is fine, as long as the company doesn’t have a competitor who does...
The danger of perverse incentives
There are so many issues to consider here—certainly more than we can cover in a single article. But what we’ve seen from the hypothetical Wells Fargo situation is that in a workplace with increased AI and automation, the traditional measures of productivity and work are breaking down.
Traditional incentive structures honed over decades become perverse incentives almost overnight once AI is added to the mix. When existing metrics (e.g. number of marketing briefs written) no longer align with the relative level of effort needed to create them, a business can quickly find itself with incentives that reward the wrong thing. We saw this in all the potential Wells Fargo scenarios: focusing on keyboard activity instead of actual work output, encouraging employees to appear busy rather than be actually productive, and/or discouraging efficiency and innovation in work processes.
The danger of perverse incentives is not new. Everyone’s heard anecdotes of IT admins who replaced their own jobs with shell scripts and now just watch Netflix all day. But AI is an accelerant to all this. The explosion of AI-powered tools means that even “regular” non-technical employees can start automating parts of their jobs, and the flexibility and “intelligence” in these tools allow them to be used in scenarios that were not possible to automate just a few years ago. And of course, the constant technological progress in the AI space means the tools will continue to grow in power and handle ever more complex tasks.
What is productivity?
At the most basic level, it’s the ratio of the output you get to the inputs you put in.
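The ratio itself is trivial to write down. Here's a toy version (the briefs-per-hour framing just reuses the marketing-brief example from earlier; real-world inputs and outputs are rarely this clean):

```python
def productivity(output_units: float, input_hours: float) -> float:
    """Classic labor-productivity ratio: units produced per hour worked."""
    return output_units / input_hours

# Three briefs in 24 hours, vs. six briefs in the same 24 hours with AI help:
print(productivity(3, 24))  # 0.125 briefs per hour
print(productivity(6, 24))  # 0.25 briefs per hour
```

The hard part, as the rest of this section argues, is not computing the ratio but choosing what counts as "output" in the first place.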
The complexity is that what you measure is what people do. So what do you measure, and are those the right things?
Redefining productivity in the AI era
Note that “#FUTURE” is a placeholder for me for something I want to expand on in later chapters, so I just have it here so I can search for it later to make sure I get everything.
Traditional methods of measuring productivity (e.g. hours worked, tasks completed, etc.) are most likely no longer relevant. So companies need to shift to figure out how to measure value creation instead of just the output. (But they also need to figure out how to incent workers to want to find more value, instead of sandbagging.)
Somehow AI needs to be factored into this when calculating productivity. You don’t want to say no to it; after all, it’s a tool, and you wouldn’t want to take away word processors and make everyone use typewriters. But you also need to understand when AI is getting in the way, subtly “completing the task” versus creating the actual value. #FUTURE
Quantitative vs. qualitative measures of productivity
A lot of knowledge work / office work is qualitative (explain qual v. quan?), things like creativity, problem-solving, innovation, thinking, etc. are hard to measure. Metrics tend to track quantitative things. (How many marketing papers did you write? How many customer meetings did you have?) Companies need to figure out how to track both as they think about tracking productivity.
The challenge of measuring cognitive work and creative output
How do you measure the value of qualitative work? How do you account for “thinking time” that doesn’t lead to immediate tangible results?
How do you value failed experiments and iterations in the creative process? Do you want to “power through” those really fast? If so, people will just use AI for them. Is that what you want?
More output doesn’t always mean more value
It’s possible to get too efficient. Hey! I got the “answer” in 20 minutes instead of 2 days. But what’s the quality of that answer versus if it took the slower, more human process?
But you also need to protect against busy work where people are faking that and then using AI.
When and how do you prioritize quality over quantity?
Balancing efficiency with innovation and quality
The role of collaboration and human interaction in productivity
All the human things. Time for relationship building. Team metrics versus individual.
Building in time for development and other things which are human but don’t track towards the end goal.
need to incent workers #FUTURE
What’s it mean to work?
Shifting definitions of work in the knowledge economy
The concept of “work” has evolved quite a bit over the years. Traditional ideas of work were based on physical labor or time spent in an office. The pandemic opened our eyes to how those metrics no longer apply, but it’s hard to see what replaces them.
Work is increasingly about creating, manipulating, and applying information and ideas. It’s mental engagement and output.
Measurement is moving towards value created versus time spent. But traditional comp models are based on hours “worked”, or fixed salaries. There’s an idea that we shift comp towards output, but doing so often leads to perverse incentives, or a race to the bottom in a gig-style economy. (Why have employees if you can micro-contract every task?)
Valuing and comping on value creation could be huge, and the high performing people will love it. But also they’ll be incented to spin up as much “value” as they can, so you have to make sure you’re measuring the right thing.
What skills are valued in an AI workplace world?
Complex problem solving that requires nuanced understanding and creativity. The ability to make strategic, long-term decisions based on multiple factors. Understanding what AI generates, and knowing where it can be used and how it can be incorporated, versus blindly pasting in results. Figuring out how to solve new and novel challenges. The ability to frame problems becomes as important (or more important?) than the ability to solve them.
Emotional labor and interpersonal skills in the AI age
EQ becomes important, the ability to empathize, motivate, inspire others. People management, leadership, professional relationships. Few want to work with an AI box all day. Understanding all this, and people’s reactions to it, are critical.
This will all change so fast
Continuous learning, being flexible and able to adopt new technologies. Not just for the new AI tech which comes out, but also for the new impacts it will have on the workplace and how work is done. And also how the company really works: what’s its core value, can it scale, what are the gates, etc.? #FUTURE
Using AI to enhance existing processes versus creating new approaches
Pros and cons of enhancing existing processes with AI
This is huge. We don’t want to create the “faster horse,” and a lot of existing processes exist because creating work output used to be difficult. If AI increases the speed of that, it’s possible that it chokes things. The “3 meeting problem”: if AI allows you to attend three meetings at once, I would argue you need to look at why you’re having meetings in the first place.
Lots more to expand here, maybe this is its own section? #FUTURE
Longer term, the potential for AI to enable entirely new business models? But let’s not get ahead of ourselves. Think about the Sam Altman approved approach.
Balancing quick wins with long-term transformational change
Reimagining workflows with AI: AI can do the “easy” stuff now, and fit more within the existing processes. RPAs, etc. But maybe that’s temporary? What will the final workflow look like?
Overcoming resistance to change when implementing new AI-driven approaches: Move this to the planning section? #FUTURE
Impact of AI on employee career development and growth
A couple of points here:
First, the new skills which will be required, lifelong learning, etc.
But also, how AI can be used for training and career development.
Can we think ahead to how AI might impact things, what skills will be important, etc.?
What do we still need humans for? Both for tasks, and human-to-human things (mentoring, peership, etc.)
The role of human mentorship in an AI-driven workplace
AI in performance management & appraisals
Scary! All the bias issues are real, but I’m not going to talk about them here (because they are already talked about and I’m not an expert, not because they’re not important.)
But I love the idea of AI providing feedback more often, and making it more personal and objective. But not replacing human feedback.
The performance metrics challenge goes back to how we opened this section: what do you measure, what do you incent, etc.
There will be a need to balance the AI with the human judgement.
AI can help different levels of employees in different ways. High functioning employees will most likely embrace it, because they want to be pushed and work with someone “smarter” than them.
AI for employee engagement & retention
Need to fit in the whole “future of Recall” narrative. Here? #FUTURE?
AI can gauge employee sentiment and satisfaction.
It should be able to figure out how to keep employees engaged. (Cool, but too future ish?) Same for its ability to create meaningful work experiences, etc. Gamified everything, etc.
Can it improve work-life balance?
Lots to explore here.
Unsorted
The line blurs between what employees own and what the business owns. What is the essence of being a human employee? Why do they work there, what do they love about it?
What if you have lots of paperwork, can’t AI help with this?
YES!
But in that case, what’s the point of all that paperwork?
How can individual employees resist the temptation to click the “write it for me!” button.
When the content is good enough, will the human even care? Meh. AI is only good enough to create 80%-quality content. But that’s fine; by definition, 80% of employees will fall within the 80th percentile.
Over time, AI generated content becomes ubiquitous, and what’s left for the org? Does it even need to be “pre-generated” at all? Can’t it just be generated on the fly? What’s the point of having a “content strategy” if the content is just generated on the fly? (Or is that the strategy?)
Ask yourself:
- What processes still matter in an AI generated world?
With AI, it’s possible for anyone to check whatever boxes of “work output” are needed. Sure, the quality may not be there, but the quantity will be. This can make things worse, especially for those employees who are not using AI. Not only will they be left behind in the tools department, but they will bear the brunt of having to read more words (of lower quality) to keep doing their same job. Employees who don’t use AI will see their productivity decrease and will have to work harder just to keep doing the same job.