
A new frontier: What HR teams should know about AI in the workplace



Let’s start with the elephant in the room: Your employees are using artificial intelligence—in fact, they probably have been for a while. 

ChatGPT may be seizing hearts and headlines with an unprecedented adoption rate, but it’s hardly the first tool of its kind. Simplifying labor through human-engineered automation is nothing new. What is consistent is that adoption almost always outpaces awareness: Companies are playing perpetual catch-up when it comes to understanding not only who’s using AI tools, but the ramifications of how they’re using them.

As an HR or People Operations professional, you’re left with a particularly complicated people problem (as if you don’t already have enough on your plate).


AI tools are not inherently dangerous or threatening to your business. But people are inherently unpredictable, not always adhering to the maxim “with great power comes great responsibility.” We often don’t immediately comprehend the power we possess when we start using new technology. 

So how, then, do you go about the essential task of understanding your exposures and creating clear guidelines for use, while also safeguarding your company’s assets? It’s a complex undertaking, one that prompts more questions than immediate answers. 

In this post, we’ll explore common AI-in-the-hands-of-humans questions, such as:

  • What tools are your employees using—and how are they using them?
  • What can companies do to protect their employees and their investments?
  • What can you do as an HR professional?
  • What else don’t we know about AI?

What tools are your employees using—and how are they using them?

Let’s start with ChatGPT. Sorry to break it to you, but at least some of your people are using it in some capacity. And while its adoption rate far outpaces that of its peers, it’s hardly the lone AI tool people are experimenting with. Jasper Chat, GetGenie, and Midjourney have made their own marks, and represent just a few of the names worth knowing. 

We may not yet know which of these tools will sustain success, but one thing’s clear: AI’s not going anywhere. Start by acknowledging that truth, and you can better assess the follow-up questions it raises. 

Maybe the most pressing question: Are your employees using these platforms exclusively on personal devices, or on a range of equipment? Security stakes can vary wildly depending on the answer. 

What starts as innocuous use can quickly become problematic if people aren’t careful about the information they divulge. If your employees are pasting proprietary or confidential information into the ChatGPT platform, they could be exposing the organization to a range of unknowns. That’s one reason high-profile, geographically dispersed businesses such as Samsung have imposed restrictions on employee use of AI—at least until further notice. 

Samsung called its move temporary, highlighting the evolving nature of its awareness. In advising employees to “not share any sensitive information” in conversations with OpenAI tools, the tech conglomerate seemed to be stating the obvious. It was a directive open to a lot of interpretation. 

The fact is, we’re dealing with largely uncharted territory. Until we know the full extent of the interactions between users and these tools, it’s prudent to be proactive. 

You can trust your people while still safeguarding against their own relative unfamiliarity with the tools they’re using. That’s just sound risk assessment. And any business trying to wrap its arms around AI use should err on the side of caution.

Austin Fossey
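
If your security or IT partners want something more concrete than a policy memo, a lightweight screening step can catch the most obvious slips before text ever leaves the building. The Python sketch below is a minimal, hypothetical example: the pattern names and regular expressions are illustrative assumptions for this post, not a vetted security control, and a clean result never means content is safe to share.

import re

# Hypothetical patterns a company might treat as red flags before text is
# shared with an external AI tool. The labels and regexes are illustrative
# only; every organization will define its own.
SENSITIVE_PATTERNS = {
    "credential": re.compile(r"(?i)\b(api[_-]?key|secret|password)\s*[:=]\s*\S+"),
    "internal_marker": re.compile(r"(?i)\b(confidential|internal use only|proprietary)\b"),
    "source_code": re.compile(r"(?m)^\s*(def |class |#include |package )"),
}

def flag_sensitive_content(text: str) -> list[str]:
    """Return the names of any red-flag patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    draft = "Internal use only: api_key = sk-12345, plus our Q3 roadmap."
    hits = flag_sensitive_content(draft)
    if hits:
        print("Hold on. This draft matches: " + ", ".join(hits))
    else:
        print("No obvious red flags found. Use your judgment before sharing.")

Even a crude check like this reinforces the habit that matters most: pausing before pasting.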

What can companies do to protect their employees and their investments?

Your employees are an investment, despite the mutual exclusivity this question might imply. In fact, people are arguably your most important asset. 

If you approach their protection as a matter of safeguarding intellectual property, you might have an easier time selling any initial restrictions or policy implementation. Depending on your industry, company size, and ambitions, you could stand to gain more than you lose by leveraging AI tools tactically. In that case, your stance could be less about AI’s dangers, and more about your ongoing assessment of the tools as they apply to your business. 

You can use contemporary examples—such as that of the Samsung staff who leaked internal source code to ChatGPT—to bolster your stance without fear-mongering. 

In a recent interview with CNBC, Suresh Venkatasubramanian, former advisor to the White House Office of Science and Technology Policy, laid out five basic guardrails for companies seeking to establish AI oversight. He suggests organizations focus on:

  • Testing AI products
  • Ensuring data privacy
  • Protecting against bias in algorithms
  • Requiring user disclosure with automated products
  • Allowing employees AI opt-outs in favor of human alternatives

Remove the term “AI” from this equation, and you realize these guardrails aren’t unlike those you’d put in place for any other new program or platform you consider adopting. Due diligence always entails testing new tools, protecting your data, and giving employees as much information as possible. 

Of course, your organization may not have the resources available to cover all these priorities at once. But you can and should assess where your exposures lie in relation to each of the above. And then, communicate a plan (however temporary) to your people accordingly. 
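
One low-lift way to make that assessment tangible is to track each tool against the five guardrails in a shared checklist. The Python sketch below is a hypothetical illustration of that idea; the class, statuses, and field names are made up for this post and aren’t an official framework from the interview or from PI.

from dataclasses import dataclass, field

# The five guardrails described above, used as checklist items.
GUARDRAILS = [
    "Testing AI products",
    "Ensuring data privacy",
    "Protecting against bias in algorithms",
    "Requiring user disclosure with automated products",
    "Allowing employees AI opt-outs in favor of human alternatives",
]

@dataclass
class AIToolReview:
    tool_name: str
    owner: str
    # Every guardrail starts as "not assessed" until someone reviews it.
    status: dict[str, str] = field(
        default_factory=lambda: {g: "not assessed" for g in GUARDRAILS}
    )

    def open_items(self) -> list[str]:
        """Guardrails that still need attention before wider rollout."""
        return [g for g, s in self.status.items() if s != "addressed"]

if __name__ == "__main__":
    review = AIToolReview(tool_name="ChatGPT", owner="People Ops")
    review.status["Ensuring data privacy"] = "addressed"
    print(review.tool_name + ": " + str(len(review.open_items())) + " guardrails still open")

The value isn’t in the code; it’s that each tool gets an owner, a status per guardrail, and a visible list of open items before wider rollout.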

What can you do as an HR professional?

If you feel overwhelmed by the concept of offering guardrails for AI oversight, that’s understandable. As an HR rep, you’re in the people business, and people alone can create plenty of conflict, tension, and exposure. Throw robots into the equation and you might quickly feel like you’re in over your head. 

But when it comes to AI in the workplace, HR’s job isn’t necessarily to assess or analyze exposure, so much as to help alleviate it through education and, at times, enforcement. That could mean playing a critical role in communicating conditions for employees’ AI use. Or it could mean bringing in external experts to enhance everyone’s understanding of proper use, and any risks at hand. 

Use your established communication channels—email, Slack, Teams, or company meetings—to deliver pertinent information just as you would with any other business initiative or change to company policy. As you do so, encourage feedback and dialogue. 

Your employees will appreciate being treated as human beings, even on matters of decidedly non-human nature, such as AI use. You can acknowledge the prevalence and even the appeal of leveraging these tools while still preaching caution in the name of the company’s collective health. 

You can also find creative ways to promote where and how employees use AI tools for the better. In the case of ChatGPT, there are plenty of advantages to be gained beyond simply writing more efficiently.

What else don’t we know?

Quite frankly, there’s a lot we still don’t know. And therein lies the fear driving many business actions when it comes to protecting people—and their profits—from AI exposure. The fear of the unknown can be very powerful.

It doesn’t help when figures like Geoffrey Hinton, the so-called “Godfather of AI” whose neural network research underpins tools like Google’s Bard, call the situation “scary.” 

It might sound dramatic when Hinton says things like he’s “regretting his life’s work.” But his wariness is mostly related to a lack of current regulation, and the rapid pace at which new technologies are emerging. Innovation is outpacing public understanding, which predictably leads to some panic. 

What we do know is that companies can influence the “ethics” of AI use, particularly when it comes to competition in this blue-ocean space. In response to a New York Times interview piece, Hinton was careful to clarify the terms of his departure from Google. He tweeted:

“In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.”

To strike the right ethical balance, Hinton and others believe, it’s on the companies developing AI to consider not just how their technology might impact the world, but also their own house. Getting ahead can’t supersede proper use.

As we continue to unpack the ramifications of regular AI use—the “who,” “where,” and “for what” of employee interactions—we’ll get a better sense of what’s appropriate and what constitutes endangerment. 

Employees will respond to education. But only if the education is delivered in the context of how responsible AI use can help them perform their jobs (not take them), and how, in doing so, they can help keep their company safe.


Andy is a content writer and editor at PI. He's an unashamed map geek, hoops enthusiast, and Goldfish cracker aficionado.
