The current Workplace AI landscape
This page is incomplete!
This page is part of my step-by-step guide to Workplace AI, which I'm in the process of writing. I'm doing it in the open, which lets people see it and provide feedback early. However, many of the pages are still initial brain dumps, bullets, random notes, and/or incomplete.
There's an overview of what I'm trying to accomplish on the "What is this site about?" page.
Want to stay updated when pages get major updates or things are added? I have a news feed specifically for guide content here: RSS feed for guide updates.
—Brian (July 2024)
In this chapter, I want to take a deeper look at what's happening today, in the real world:
- What’s happening out there?
- How did we get here?
- Is this even “AI?” (Does it matter?)
- Things are changing fast: recent milestones
What’s happening out there?
As we covered in the last chapter, my view is that the biggest issue today is that employees can directly access extremely powerful AI tools on their own. If you don't provide the AI tools your employees want to use, they'll simply use their own anyway.
Microsoft and LinkedIn did a survey in May 2024 where they asked people whether they wanted to use AI at work, and whether they were already using it on their own. The full survey results are here: 2024 Annual Work Trend Index from Microsoft and LinkedIn
I pulled a few visualizations from the survey which are important for our conversation. First is one showing that 3 out of 4 people use AI at work, a number which almost doubled in the first part of 2024:
Next is this one showing that bringing your own AI to work (BYOAI) is something that employees across all age groups do, i.e., it's not just Gen Z or younger people who are doing it:
How did we get here?
Most of the AI strategy books I've read recount the whole history of AI going back to the 1950s. There's a lot of fun history and trivia there, but in the context of trying to wrap your head around workplace AI, none of that matters.
Honestly, how we got here is simple. In November 2022, OpenAI made ChatGPT widely available to the public. 100 million people (!) used it within the first few months. Almost immediately, people started using it at work, usually on the down-low without telling their managers. Even though ChatGPT often hallucinates, and can sound somewhat "robotic" (especially when people don't take the time to become skilled prompters), the reality is that ChatGPT, even in 2022-23, was "good enough" for what people were using it for in the workplace.
Suddenly “AI” was everywhere and all the rage. Other companies (Microsoft, Google, Apple... everyone) raced to release AI products or to talk about AI, and just like that, here we are.
Most AI tools today are “Generative AI”
They're called generative AI (or "GenAI") because they generate new content based on existing content. Large language model (LLM) tools like ChatGPT and Microsoft Copilot, as well as diffusion models (image and video generators), fall into the generative AI bucket.
So even in the case of document or meeting summarization, it's still GenAI: the summary is newly generated content, even though it's just a small, distilled fraction of what you put in.
By contrast, traditional natural language processing, speech recognition, and computer vision systems are not GenAI.
Is this even AI? Who cares!
Ironically, because the vast majority of tools that employees use for work are LLM-based, there's a debate among AI purists as to whether LLMs are "true" AI, and therefore whether calling this whole space "Workplace AI" or "AI in the workplace" is even accurate.
I’m not educated enough on AI science to have an opinion on whether LLM-based tools are real AI or not, but what I do know is:
- Everyone calls them AI, and
- Regardless of what they're called, we still have to deal with them.
So I’m calling this AI because everyone else is calling this AI.
Things are changing fast
TODO: Look at the capabilities and improvements from GPT-1, 2, 3, 4. Look at how many big models come out, growth of chips, growth of power capacity. Explain linear scaling.