
What can AI realistically do today?

Brian Madden, Workplace AI Consultant & Analyst
This page was last updated on July 9, 2024.
This page is part of The Workplace AI Strategy Guide

This page is part of a step-by-step guide to Workplace AI strategy, which I'm currently in the process of writing. I'm creating it like an interactive online book. The full table of contents is on the left (or use the menu if you're on a mobile device).

What's this guide all about? Check out the intro or full table of contents.

Want to stay updated when pages change or things are added? There's an RSS feed specifically for strategy guide content: RSS feed for guide updates.

This page is incomplete!

This page is part of my step-by-step guide to Workplace AI, which I'm in the process of writing. I'm doing it in the open, which allows people to see it and provide feedback early. However, many of the pages are just initial brain dumps, bullets, random notes, and/or otherwise incomplete.

There's an overview of what I'm trying to accomplish on the "What is this site about?" page.


Brian (July 2024)

In the previous chapter, we looked at the grand promises and marketing hype you hear from people selling workplace AI. Now let's pivot to separate fact from fiction and look at the current reality of the technical capabilities of the AI products available for your employees to use. While it's true that AI steadily improves month by month, and that any impressions you have from using products more than a few months ago probably don't apply today, AI's capabilities are still often overestimated and misunderstood.

This chapter provides an honest assessment of what AI can realistically do in today's workplace, as well as where it falls short and where it flat-out fails. We'll then explore the hype versus the reality, debunk common misconceptions about what AI can do today, and close by looking at how we can reasonably expect this to change in the near future. By understanding AI's realistic potential and constraints, you'll be better equipped to plan for its use in your own workplace.

A quick reminder: this information changes quickly! Whether you’re reading this online or in print form, check the date at the top of the page to see when it was last updated.

In this chapter:

  • What workplace AI is good at today
  • Where workplace AI falls short
  • Misconceptions about current products & capabilities
  • What’s going to change in the near future?

What workplace AI is good at (today)

Much of this guide so far has had a skeptical, almost negative tone toward AI. That's largely on purpose: there's a lot of hype out there, and the most important thing from my perspective is that you come away with a realistic understanding of what AI can actually do. So it's about time we looked at the positive, genuinely real things AI can do in the workplace today!

Another quick reminder: This entire guide is about workplace AI, which is the term I made up to describe the AI-powered tools used directly by employees in their day-to-day office work. There are many existing and totally awesome uses for AI—bank fraud transaction analysis, genetics discovery, cancer screenings, etc.—which do not fall under the umbrella of workplace AI, so I won’t be discussing them here.

But when it comes to the workplace itself, AI is good at several things today.

Text & language processing

This is the branch of AI called “natural language processing” (NLP), and basically describes AI interacting with text. This is probably what most of us think about when we think about AI, because it includes technologies like ChatGPT or Microsoft Copilot.

The actual processing that can be done with text is virtually limitless: writing, editing, summarizing, translating, chatbots, etc. A huge range of tools and capabilities flow from this.

Speech recognition

Technically, recognition and transcription are two different things, but AI is good at listening to speech or audio and transcribing it to text (which is usually then acted on further by other text-processing AIs).

Speech generation

AI is also good at generating speech from text. This capability has technically existed for decades. (Fun fact: the TI Speak & Spell came out almost 50 years ago!) But recent advances sound incredibly lifelike, with proper intonation, phrasing, accents, and breath sounds. (In fact, it's almost too good: AI can clone any real human voice based on just a few seconds of audio, which bad actors are already using for nefarious purposes.)

Data analysis

AI can process large datasets quickly, looking for patterns and insights. It can also look at past data and predict trends. These capabilities are often combined with others for things like personalization.
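To make the "predicting trends" idea concrete, here's a minimal, hypothetical sketch: a least-squares line fit over made-up monthly sales figures. Real AI analytics tools are far more sophisticated, but the core move (find a pattern in past data, then extrapolate it forward) is the same.

```python
# Toy illustration of trend prediction: fit a straight line to past
# monthly sales, then extrapolate to forecast next month.
# All numbers here are invented for illustration.

def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

months = [1, 2, 3, 4, 5, 6]        # past six months
sales = [10, 12, 14, 16, 18, 20]   # units sold (a clean upward trend)

slope, intercept = fit_line(months, sales)
forecast = slope * 7 + intercept   # predict month 7
print(round(forecast))             # → 22
```

Of course, real data is noisier and real tools use richer models, but this is the "look at past data and predict trends" capability in its simplest form.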

Image recognition

AI is good at processing images and then categorizing what's contained in them. The output is usually a text description which can then be further processed by a text-centric AI.

This capability can be used to process static images as well as videos.

Image generation

Sure, there are issues with weird chins and the wrong number of fingers. But all in all, AI is pretty good at generating images from text-based prompts.

Task automation

AI is good at automating repetitive tasks, especially when it's roughly the same thing over and over. Combine this with text processing and you get some real flexibility.

How these are combined and implemented

These capabilities interact and combine in fun ways. For example, video processing, audio processing, text understanding, and data analysis together mean that AI is good at editing and creating videos.

Don't confuse today's implementations with the core technology, though. Most of this runs in the cloud today, but these capabilities will become more local, more integrated, and more interactive over time.

Where workplace AI falls short

There's a lot that's awesome, but also a lot that isn't quite there yet.

Accuracy / no concept of what’s true or real

The problem isn’t that it’s wrong. It’s that the AI doesn’t know when it’s wrong. It doesn’t really understand the concept of wrong.

This is the root of hallucinations, or, less charitably, BS. (There's a great paper on this: "ChatGPT is B.S.")

General intelligence

Lots of research is being done on whether and when we'll get there. AI is really cool today, but it doesn't appear to be actually smart.

Creativity*

There's a lot to unpack here. AI's "creativity" is based on remixing what it has seen before. (Then again, a lot of people believe that's how humans learn too, in which case AI is no different and really is creative.)

Context & Nuance

AI especially struggles with context and nuance for things it hasn't seen before. It's a statistics engine, so if a problem looks like something common it has seen 100 times before, it will lead you down that path. It's not really thinking or considering everything; it's just looking at what the statistically most likely next text is. So you get "average," which is probably fine for a lot of situations.
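To make the "statistically most likely next text" idea concrete, here's a toy sketch: a bigram model that counts which word follows which in a tiny made-up training text, then always predicts the most common follower. Real LLMs use neural networks over subword tokens, but the underlying idea of predicting the likeliest continuation is similar.

```python
# Toy illustration of "predict the most likely next text": count which
# word follows which in some training text, then always pick the most
# common follower. The training text is invented for illustration.
from collections import Counter, defaultdict

training_text = (
    "the report is due friday . the report is late . "
    "the meeting is friday ."
)

# Count next-word frequencies for each word.
followers = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def most_likely_next(word):
    """Return the statistically most common word that followed `word`."""
    return followers[word].most_common(1)[0][0]

print(most_likely_next("report"))  # → is ("is" follows "report" twice)
print(most_likely_next("the"))     # → report (seen twice vs. "meeting" once)
```

Notice that the model has no idea what a report or a Friday *is*; it just reproduces the most common pattern it has seen, which is exactly the "average" behavior described above.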

Long running tasks

AI has a pretty short attention span, so don't expect it to reliably carry out long-running, multi-step tasks.

80% quality

Be aware of the "80th percentile" thing: AI output is often good, but not great. That's not bad, but you also can't run your whole company on it. (Or can you?)

Misconceptions about current products & capabilities

Sending mass emails & thinking it's wired in everywhere

A common misconception is that an AI assistant can simply send a mass email to all your customers, or that it's otherwise wired into all your other systems. Out of the box, it generally isn't; it only acts on what you explicitly give it.

AIs are not normal computers

They give different output every time, since a bit of randomness is purposefully added. They also exhibit strange, un-computer-like behaviors. (Remember the reports of ChatGPT getting "lazy" and slow in December? Or the debates about whether saying "please" and "thank you" gets you better results?)
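To illustrate the "purposeful randomness" point, here's a toy sketch of temperature-based sampling, a common technique behind it. The words and scores are invented; the point is that the model samples from a probability distribution rather than always picking the top answer, and a "temperature" knob controls how random that sampling is.

```python
# Toy illustration of why AI answers differ run to run: models sample
# the next word from a probability distribution, and a "temperature"
# setting controls how much randomness is allowed.
# The candidate words and scores below are made up for illustration.
import math
import random

def softmax_with_temperature(scores, temperature):
    """Convert raw scores into sampling probabilities."""
    scaled = [s / temperature for s in scores]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

words = ["yes", "maybe", "no"]
scores = [2.0, 1.0, 0.5]  # hypothetical model scores for each word

cold = softmax_with_temperature(scores, 0.1)  # nearly deterministic
hot = softmax_with_temperature(scores, 2.0)   # much more random

# At low temperature the top word dominates; at high temperature the
# probabilities flatten out, so repeated runs give different answers.
print([round(p, 3) for p in cold])
print([round(p, 3) for p in hot])
print(random.choices(words, weights=hot))  # varies run to run
```

This is why the same question can get different answers on different days: the randomness is a deliberate design choice, not a bug.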

The AI vendors are not normal enterprise vendors

There are a lot of 25-year-olds trying to create AGI and become the first trillionaires. They don't care as much about GDPR as you do.

In an interview from June 2024, OpenAI CTO Mira Murati (YouTube link) said:

Inside the labs we have these capable models, and they're not that far ahead from what the public has access to for free. And that's a completely different trajectory for bringing technology into the world than what we've seen historically. It's a great opportunity because it brings people along. It gives them intuitive sense for the capabilities and risks and allows people to prepare for the advent of bringing advanced AI into the world.

(h/t Simon Willison)

This is a good point, but also scary!

It makes sense. These foundational models cost hundreds of millions of dollars to train and tune, the competition for the “best” model is fierce, and the shelf-life of the “latest” model is limited. So it makes sense that OpenAI doesn’t have GPT.next just sitting there, costing them billions of dollars of lost market value. Once this thing is built, they need to get it out into the market ASAP!

But think about that from an enterprise standpoint. Yes, it’s true, this is completely different than what we’ve seen historically. But that’s scary. OpenAI’s “great opportunity to bring people along” which “gives them the intuitive sense for the capabilities and risks” means that “the people” are figuring out what the risks are, rather than OpenAI. This is not how enterprise software is traditionally created or sold. (Just as Murati said.)

But will employees care about this? Do they know that the company that just pushed out the latest update did so before figuring out all the risks, with the expectation that the end user will figure it out?

It’s still relatively slow and expensive

People build all these awesome demos and then realize they’ll need to spend $1,000 a day in cloud fees to do work they could pay a human $200/day for.

We don’t know how AI “thinks” or why it does what it does

This shows up in two related research areas: interpretability (understanding how and why an AI produced a particular output) and alignment (making sure an AI's goals and behaviors actually match what we want).

AI is good at plagiarism

It’s good at generating things it’s seen before. One of the risks is accidental plagiarism.

AI needs a lot of training data

For a human, you might be able to say, “We write proposals in response to RFPs. Here are 4-5 we’ve done before. Get to it!” A human can be fine here, but AI cannot.

ROI takes time

Costs can be high, training is slow, and it's really hard work to set up your own system. People have this idea of "I want ChatGPT for my own work," thinking you can point it at your Gmail archive and Dropbox and tell it "now become me." Maybe some day! But not yet.

AI is not even just pretending to care

AI doesn’t care. It doesn’t pretend to care. It doesn’t know what pretending is.

Its output sounds caring, but it isn't real. (Which is fine; lots of humans are like that too.) Just don't forget that nothing behind the words actually cares.

Maybe best said from one of my favorite movies as a kid:

(Image: still from Short Circuit, Tri-Star Pictures, 1986)

The next line in this movie is, “Err, usually runs programs.” :)

What’s coming soon?

There’s a chapter in the final section of this strategy guide which goes deep into planning for future AI improvements, so we’re not going to go too deep on that topic here. That said, the pace of improvement of workplace AI technologies is nuts. Quite literally, it’s improving faster than you can plan for it.

Seriously. Your own implementation of an AI product, project, or initiative is going to be technically obsolete by the time you get your project done. This is especially vexing because you'll realize this part-way through your project and be tempted to take a step back to re-do some prior work. But if you do that, you'll keep doing it forever and your project will never get done. (Heck, many of these products are cloud-based services which are updated often, so even if you try to lock in your solution, it's moot, since your provider will randomly change things even after you make it to production.)

The way you handle this is to change your mindset about how you do AI projects. This is why so much of this guide focuses on how you prepare your company for the concepts and strategies of employee AI use, rather than the specific capabilities of specific products. There are several chapters covering how you actually create and implement your strategy later in this guide, and the fact that you can never “lock in” a specific product version or technology is core to all of them.

So, all that said, what’s “coming soon” almost doesn’t matter.

We know that the foundational models will continue to improve. Even “minor” version changes based on fine tuning and other runtime changes can have big impacts. (For example, OpenAI’s GPT-4, GPT-4 Turbo, and GPT-4o, while all based on the same “v4” foundational model, each showed impressive gains in capabilities along the way.) So you need to plan that the models will continue to slowly and steadily get better.

You also need to plan for models to randomly disappear. There are lots of unsettled legal issues surrounding many of these models. Not just how they were trained, but how they cite sources, how the various biases are disclosed, etc. Different jurisdictions have different rules and regulations which are also always changing. So be ready to wake up one day and randomly find that the AI tool you’ve been productively using for the past three months is suddenly no longer available.

The future will be about the glue and automations, not the core models.

Today, most AI runs in the cloud, but there's a trend toward running it on-device, which is faster and potentially more secure and private. Expect many new form factors to be swizzled together from these building blocks.

Conclusion

As is hopefully clear, the reality of workplace AI is more nuanced than the hype might suggest. While AI has significant and impressive capabilities and plenty of valid use cases in every workplace today, the leading visionaries and evangelists are still several steps ahead of what's generally available to most people. So understanding the limitations is critical for building a successful plan for your company and employees.

In the next chapter, we'll take a look at how employees are actually using AI in their day-to-day work, both with the full support of, and without the knowledge of, their employers.