Are Employees Using AI Without Management’s Knowledge? Yes!

With the number of options growing by the day, the answer to this question is almost rhetorical.

Between ChatGPT, Microsoft Copilot, Grok, Claude, and Google Gemini, plus the AI tools built into Zoom and countless other apps, it’s a near certainty that people in your company are using AI without management’s knowledge.

It’s similar to the parables in the Bible about sheep.

When left to our own devices (no pun intended!), we will wander…or undertake things that could end up causing harm to ourselves or others, or in the context of today’s subject, harm to our company.

In the case of unauthorized use of AI in the workplace, there are dozens of statistics out there, such as:

  • 75% of global knowledge workers are using AI tools.
  • 78% of these individuals are bringing their own tools.1

These statistics certainly show the depth of adoption, and let’s be honest: it’s no surprise that employees throughout the organization are using AI tools.

While I’m not trying to sound cliché, it’s true that the possibilities are (almost) endless.

A couple of everyday examples can include:

  1. The CEO’s assistant is using the Zoom AI tool to summarize the latest executive-team meeting.
  2. Someone in Purchasing has downloaded Claude and developed a new dashboard instead of the usual monthly report.

According to a new Gallup survey, most usage focuses on generating ideas, consolidating information, and automating basic tasks, but in some cases, employees are using AI to make predictions or to set up, monitor, or operate complex equipment or devices.

Now this isn’t our first foray into the topic of the risks of using AI.

In this previous article announcing our AI eBook for the insurance industry, for example, we explain how ERM should not be a naysayer but rather an enabler that helps the company benefit from AI while addressing risks in the most meaningful way.

What about oversight of AI usage?

The following results from a survey by EisnerAmper, shared in this article by Norman Marks, show how many ‘sheep’ are truly using AI tools without a shepherd.

  • Only 36% of respondents say their company has a formal AI policy in place.
  • Only 22% say their company actively monitors AI usage.

Talk about brewing storm clouds…

Similar to launching a new product or initiative on a whim, many companies get an extra dose of shiny-object syndrome when it comes to fancy tech tools. With all the buzz around AI, it’s easy to see how many will start using it without considering downstream implications.

And like the ‘sheep’ the Bible references, those who wander like this and use AI tools without their company’s knowledge or consent generally mean no harm.

A quote from Michael Rasmussen shared in this previous article on good AI governance illustrates this well when he says:

The problem with AI is not simply that it is powerful. The problem is that organizations can enter the labyrinth too casually.

Without clear guardrails in place that are enforced, it won’t take much for a well-intentioned employee to create all sorts of headaches.

There is plenty of material elsewhere on the risks of biased data, model drift, and hallucinations, so I won’t go into them here.

But there are two specific risks that I think need to be on the radar of risk professionals as they help their organizations develop better governance of these tools.

The first is liability…

As discussed in a previous article on Board oversight, absent gross negligence, Boards historically were not held liable for the impacts risks could have on the company, shareholders, employees, customers, or the general public.

In recent years, thanks to court decisions and rule changes by regulators, Boards are increasingly being held liable for risks that materialize. No longer can they say they didn’t know or cast blame on a third-party tool. While not an example based on AI, this article provides a good compare/contrast.

Especially in the case of large language models (LLMs), the risk of hallucination, or the tool making things up, is very real. If output contains erroneous information because an AI tool ‘hallucinated,’ the Board could be held liable, even personally.

The second risk that really deserves attention is the security of your company data and any proprietary information.

Hackers have been a fact of life for decades, but in the world of LLMs and AI tools like the ones mentioned above, the question must be asked:

  • Are people taking all the right steps?
  • Are they considering what data the bots have access to?

Without proper safeguards and rules in place, it’s very easy for any company data or trade secrets stored on the employee’s machine to end up training the model.

…or even worse.

I’ve heard of trade-secret intellectual property and internal process documents being stolen and either used against the company for ransom or used to set up another firm as a direct competitor.

The possibilities of AI, especially fast-growing generative AI, are nearly endless.

But so are the risks of harm to the company, which is why robust AI governance is necessary.

Because without clear rules and guardrails that are enforced, it’s the Wild West. The ‘sheep,’ in this case the company’s employees, will certainly wander, which can be dangerous in the end.

To learn more, check out previous articles exploring what proper AI governance should look like, one method for evaluating innovative tools like AI, and why it’s okay not to be the first to adopt something.

Do you know people using AI without their company’s knowledge or authorization?

Join the conversation on LinkedIn to share your thoughts.



Meet Carol Williams, SDS Founder & Lead Strategist

To our readers:

This blog was launched to provide strategy and risk practitioners with a go-to resource to better guide their efforts within their companies. Thank you for bringing me and my team along to be part of your journey towards better risk management, strategic planning and execution, and overall decision-making. Happy reading!
