Higher Ed HR in the Age of Artificial Intelligence: A Human-Centered Approach

Winter 2023-24
Julie Burrell

AI’s most astonishing feats have been enthusiastically publicized, from passing the bar exam to salvaging a lost John Lennon vocal and even resurrecting Vincent Van Gogh to chat with museum-goers. But AI’s more banal, routine uses will soon simply become part of what you do in HR, and a basic skill in any knowledge worker’s repertoire.

The good news? AI can’t replace the human in HR. The nuanced interpersonal skills you’ve honed and the very human decisions you make on a minute-by-minute basis won’t be replicable by AI anytime soon. The not-so-good news is that without firm AI guidance in place, the technology raises ethical and legal concerns. Just ask the lawyers fined $5,000 for using fake case law fabricated by ChatGPT.

There’s no doubt that generative AI will shape the future of HR. In the shorter term, the HR pros interviewed for this article say that AI can help you upskill, maximize efficiency and automate repetitive tasks. But it’s critical to be realistic about what AI can and can’t do well, its inherent risks, and how some common-sense guardrails like an AI policy can ensure you and your team are using AI safely as well as productively.

What Is Generative AI?

Even if you’ve never taken ChatGPT for a spin, you’re likely already using AI in your daily life. Google searches, virtual assistants like Siri and Alexa, personalized Netflix recommendations, and computer chess games have been powered by traditional AI. But generative AI — the recent advancement that includes ChatGPT, DALL-E, Google’s Gemini, Meta’s Llama, and others — can actually make new content. Or, more accurately, it can create new content based on existing work. Generative AI is trained on human-created art, writing, math, computer programming, music, etc. By training on vast bodies of human-created work, AI “learns” what such content should plausibly look, sound and act like.

The good news? AI can’t replace the human in HR. The nuanced interpersonal skills you’ve honed and the very human decisions you make on a minute-by-minute basis won’t be replicable by AI anytime soon.

Generative AI is currently in the “peak of inflated expectations,” the high point of the Gartner hype cycle model, where both hopes and fears tend to be exaggerated. Remember the self-driving cars we were all supposed to be cruising around in by now? Those promises were made during the peak of inflated expectations for autonomous vehicles.

At its core, AI is a prediction machine. When you ask ChatGPT a question or enter a prompt, it predicts what the answer should be based on probability. (Sort of like Microsoft Word’s or iPhone’s autocomplete tool, but more sophisticated.) If you ask an AI chatbot a question — like “what’s the single most important thing that a new HR pro needs to know?” — it will statistically predict the answer based on the information it has been trained on. According to the free version of ChatGPT (running the GPT-3.5 model at the time of writing), the most important thing a new HR pro needs to know is “understanding the importance of building and maintaining positive relationships.”

Is this what you would tell someone new to HR? Maybe. Maybe not. AI is trained to be probable, not to be accurate.
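The “prediction machine” idea can be made concrete with a deliberately tiny sketch. The bigram counter below is far simpler than a real language model, which predicts over whole vocabularies using deep neural networks, but it illustrates the same principle: pick whatever most often followed the current word in the training text. Probable, not necessarily accurate.

```python
from collections import Counter, defaultdict

# Toy illustration only (not how a production LLM works internally):
# count which word follows which in the training text, then "predict"
# by choosing the most frequent follower.
def train_bigrams(text):
    words = text.lower().split()
    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def predict_next(followers, word):
    options = followers.get(word.lower())
    if not options:
        return None  # word never seen in training
    return options.most_common(1)[0][0]  # highest-probability follower

model = train_bigrams(
    "hr builds positive relationships hr builds trust hr builds positive culture"
)
print(predict_next(model, "builds"))  # prints "positive"
```

Because “positive” followed “builds” more often than “trust” did, the model predicts “positive” every time, regardless of whether that is the right answer for your question.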

How Higher Ed HR Pros Are Using AI Now

Rahul Thadani, executive director of HR systems operations at the University of Alabama at Birmingham, says that AI is currently limited in what it can do for HR. Because of the inherent risks of AI such as bias and privacy issues, “its use in HR is limited to use cases where there is less of an impact on regulatory or compliance-driven functions,” Thadani says. “Until some of these risks of AI are managed or mitigated, the value added by AI will be limited in scope.”

But AI can automate and speed up some common HR tasks. Rhonda Beassie, associate vice president for people and procurement operations at Sam Houston State University, shares that she and her team are using AI for both increased productivity and upskilling, such as:

  • Creating first drafts of and benchmarking job descriptions.
  • Making flyers, announcements and other employee communications.
  • Designing training presentations, including images, text, flow and timing.
  • Training employees for deeper use of common software applications.
  • Providing instructions on developing and troubleshooting questions for macros and VLOOKUP in Microsoft Excel.
  • Troubleshooting and creating reports pulled from their enterprise system.

AI is particularly helpful for newer employees, who can troubleshoot software and enterprise reports on their own with it, Beassie notes. “They can simply say to the AI, ‘I received an error message of X. How do I need to change the script to correct this?’ and options are provided.”

Recruitment and hiring are other areas with potential AI applications, says Tony Sanchez, chief human resources officer at the University of North Texas at Dallas. “It helped me staff a unit in an aggressive time frame,” Sanchez said. “AI parsed resumes, prescreened applicants, and allowed scheduling directly to the hiring manager’s calendar.” He estimates that AI recruitment sped up the process significantly, with the hiring manager seeing a qualified candidate “in the same day as the applicant applied, versus week(s) for a recruiter to review resumes via the applicant tracking system.” For this, he used Paradox AI software, not a free tool.

Sanchez stresses that choosing your AI tools, like recruitment software, “is where a strong partnership and early involvement with an organization’s IT department is crucial. It is to everyone’s benefit to invite the IT subject matter experts into the vetting, selection, and ongoing monitoring process of the AI to ensure the AI company’s data security processes are safe and ensure the organization and its members are protected from breaches and collection of or misuse of the organization’s information.”

Thadani also sees potential in AI’s data analysis capabilities, including aggregating and interpreting data on employee engagement collected through surveys and performance reviews. But he stresses that human behavior is too complex to be contained solely within data sets, and therefore human decision-making and oversight is needed.

The Risks of Using AI

Beassie also identified several risks associated with AI. She worries that overreliance on AI may “dull employee writing and critical thinking skills.” She also fears that “employees won’t understand and/or respect that publicly available generative AI systems are not confidential.”

Developing or updating an in-house AI policy document is a must, and it should be guided by an understanding of the risks inherent in using AI, including:

Bias and Fairness. Because AI is trained on and by humans, AI will replicate existing biases, unintentionally or not, which may carry legal and reputational risks. The Equal Employment Opportunity Commission recently sued a tutoring services group for using AI to screen out older applicants; the case cost the company $365,000.

Also problematic, but less obvious, are thought and availability biases. Katie Conrad, professor of English at the University of Kansas, warns that AI output clusters at the center of a bell curve, spitting out “plausible output” while cutting off the curve’s tail ends, where more diverse thinking may live.

AI also trains on what’s available — it scrapes the internet for content — which means that if it’s not online, AI likely won’t know it. AI favors pop culture and marketing speak rather than, say, materials in archives or the art of non-Western cultures. AI will also use the self-promotional language of business, which litters the internet, and disguise it as factual information.

Resource
See the EEOC’s technical assistance document on preventing AI-related discrimination under Title VII.

Making Up Facts. Generative AI isn’t great at providing accurate information. Again, it’s trained to be probable, not accurate, and so it fabricates answers. These “hallucinations,” as they’re often called, appear real precisely because they’re plausible. Most chatbots are much more valuable as writing and analytical tools than as a search engine with a list of results you can scan for credible, trusted sources.

When writing this story, for example, I asked Google’s chatbot Bard (now called Gemini) to supply me with information on colleges and universities that are using AI for HR applications. It supplied a very realistic-looking list, but when I did some research, I couldn’t verify a single example. When I asked Bard to supply me with the sources it used to compile the list, the sites had little or nothing to do with higher ed HR.

If you’re using AI to compile information, you should either have expertise or consult someone with expertise in that field. Experts will be able to tell the real from the fake. Or, you should be willing to spend some time verifying the info AI has provided. As Conrad explains, AI is not a substitute for expertise. If you cannot evaluate AI’s output with the eye of an expert, you shouldn’t use it to make informed decisions.

Resource
Perplexity AI is better than most AI at search because it’s trained to use real, credible sources and cite them clearly so they’re easy to review.

Data Privacy. AI has been called a “data privacy nightmare” for its use of our personal information without consent. But most concerning for HR is that information you enter into an AI chatbot is not private. Proprietary, sensitive or legally disallowed information should not be entered as a prompt. Assume everything you share with a chatbot will be made publicly available.

Be familiar with AI tools’ privacy policies. Though the most recent privacy statement by OpenAI (maker of ChatGPT and Dall-E) says they’re “strongly committed to keeping secure any information we obtain from you or about you,” caveats abound in the fine print, detailing how and when they can and will share your personal information.

Assume everything you share with a chatbot will be made publicly available.

Also consider: What are your state’s laws on data privacy? Are there emerging regulations, like those being considered by California? What multi-state workforce issues might complicate your use of AI? There will also soon be federal guidelines in place spurred by the Biden administration’s recent executive order on AI.

Alienating Your Audience. Consider how your audience might feel if they found out something you composed or designed was created by AI, especially if you didn’t disclose this. Depending on the context — text for a sensitive message or the background used in a promotional photo, for example — it might feel cold, uncaring, or even deceptive. Be sure to err on the side of transparency.

Effects Beyond HR. The use of AI also has ethical, legal, moral and environmental implications for its users that go beyond human resources, including contributions to the climate crisis and extractive mining labor exploitation; digital sweatshop labor; risks to society posed by open source AI; governmental development of autonomous weapons; copyright infringement; and political misinformation, which we will surely see impact the upcoming presidential election. AI can even replicate the voice or likeness of your loved ones and celebrities, leading to possible fraud. Even Tom Hanks isn’t safe from fakes.

Six Tips for Using AI Safely

When I asked Tony Sanchez about AI’s potential pitfalls, he warned against “not using AI at all.” To be sure, not using AI comes with its own set of problems — largely, falling behind on emerging technology and losing out on its productivity gains. Here are six ways to safely use AI without discouraging experimentation.

  1. Have Lots of Conversations About AI. Talk to your team: Are they already using AI? How? Take the temperature of your department and colleagues. Has anyone become well versed in AI who can act as a point person or emerging AI leader on campus? AI is developing rapidly, so keep having these casual conversations with as many people as possible. Include IT in these discussions. The relationship between the chief HR officer and the chief technology officer is crucial to making AI work for your needs, both in ensuring compliance and when choosing vendors that rely on AI tech. As HR-specific AI-driven tools continue to come on the market, promising assistance with job functions from performance reviews to applicant tracking, it’s critical to review any new software with an eye on legality and compliance.
  2. Form an AI Council. An AI council or working group that’s broadly cross-functional across campus areas and departments can help assess how AI is and will be used ethically, effectively and safely. Ideally, AI implementation is collaborative rather than a top-down process.
  3. Create a Living AI Policy Document. One of the most important functions of an AI council is creating and updating guidelines around AI usage and continually improving them. These can be university-wide, with modifications for each unit or department. Some policy recommendations you might consider are:
    • Establish when AI usage is acceptable, who can use it, and in what circumstances. Consider how this will vary depending on the area and department.
    • Establish a vetting process for anything created or designed with AI, including a requirement that nothing leave an office before it has been edited by humans.
    • Clarify what information employees are allowed to enter as prompts, especially in publicly available chatbots. These should include tight guidelines around privacy and any legal considerations impacting inputting personnel and/or student data.
    • Consider ways to combat biases inherent in AI.
    • Designate AI point person(s) whom employees can contact with questions.
    • Include a content authenticity statement noting whether something was composed or designed using AI. This is especially helpful in noting when AI-generated transcripts of meetings or videos may not be accurate.
    • Decide on consequences for violations of these policies, with the caveat that a robust AI literacy program should be considered for anyone using AI.
    • Revise your guidelines often. Given the breakneck pace of AI development, update them regularly, incorporating feedback from stakeholders.
    • Tie these guidelines to your institutional values and mission.
Resource
Take a look at the University of Michigan’s AI guidelines, broken up into staff, faculty and student categories.
  4. Encourage and Reward Experimentation. For example, Sam Houston State University is piloting an AI project with its HR team. “An employee with an idea on how AI might improve a process is given time and access to work on the idea, and then presents results to an ever-expanding group,” Beassie says, with all HR staff invited to give feedback. They’re still considering how to recognize and reward these employees, whether with gift cards or extra vacation time, depending on the value of the contribution. In their AI action plan, Educause recommends forming a community of practice where professional staff, faculty and students “collaborate and experiment with AI tools and applications without risk to production systems” by using an AI sandbox.
  5. Train Your Staff in Using AI. Using AI well, including understanding its legal and ethical pitfalls, is now a critical skill for any knowledge worker. AI literacy may also soon become a central value in higher education.
  6. Think Creatively About Inclusion and Belonging. AI-powered tools, like automated text-to-speech and speech-to-text, may help further your equity goals when used with care. Think outside the box when it comes to how AI might help employees connect across differences or upskill their inclusive practices. For example, Jon Humiston of Central Michigan University recommends AI to help you become more confident using they/them pronouns. To support autistic job applicants, use AI to revise job interview questions from abstract to concrete.

Get Started, and Keep Going, With AI

Many functionalities of generative AI will take time to emerge, since developers are and will be releasing new and updated versions of software to take advantage of AI (e.g., Microsoft Copilot and Adobe Firefly). AI will be “baked into future releases of ERP and other business products,” Thadani notes. “This will be a slower and more deliberate adoption process that will take time, commitment, and resources just like any new technology that comes along.” In the meantime, here are ways to take advantage of what AI can do now.

AI Is Your Time-Saving Assistant. The HR pros I interviewed for this story all emphasized AI’s potential for time saving. AI isn’t a replacement for you or your team, but a strategic partner in:

  • Brainstorming and outlining.
  • Composing summaries of longer text.
  • Writing job descriptions and creating interview questions.
  • Suggesting ways to improve your writing or marketing.
  • Acting as a critic, such as playing devil’s advocate for your argument or noting elements you might have missed.

Sanchez also sees potential in AI being trained in frontline work like “answering common questions such as basic benefit and recruitment processes,” allowing “for staff to grow into other roles such as benefits coordinator or analyst positions.” Consider the time staff would have for management coaching and other consultations if routine questions were answered by a chatbot.
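At its simplest, a frontline chatbot of the kind Sanchez describes can be thought of as keyword matching against a bank of approved answers. The sketch below is purely illustrative: the questions, answers and keywords are invented placeholders, not any university’s actual system, and real HR chatbots use far more sophisticated language understanding.

```python
# Hypothetical FAQ bank; the entries below are placeholder text for
# illustration, not real policy language.
FAQ = {
    "benefits": "Open enrollment runs each fall; see the benefits portal.",
    "leave": "Submit leave requests through the HR self-service system.",
    "payroll": "Paychecks are issued on the last business day of the month.",
}

def answer(question):
    """Return the first FAQ answer whose keyword appears in the question."""
    q = question.lower()
    for keyword, reply in FAQ.items():
        if keyword in q:
            return reply
    # Routine questions get instant answers; everything else escalates
    # to a human, which is where staff time is actually needed.
    return "Let me connect you with an HR team member."

print(answer("How do I request leave?"))
```

The design point is the fallback line: the bot handles the routine, and anything it can’t match goes to a person.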

Resource
The University of Virginia’s frontline HR chatbot is a good example. (Here’s how it introduces itself: “Hi there! I’m CavBot, and I am here to answer your HR-related questions such as benefits, leave, payroll, and more.”)

AI Is Your Coach. According to Thadani, AI offers “a tremendous opportunity for employees to upskill themselves to learn to leverage the power of generative AI to become more productive (both in their work and personal lives), while also continuing to grow their knowledge and skills.” Employees can upskill using AI as an interactive private tutor that aids in problem-solving, he notes. “For example, if you are trying to create a pivot table or summarization in Excel, you simply ask it.” If you need it to explain further or do a deep dive into a specific problem, you can ask AI to provide more details. “So instead of having to click through various websites and searches, it’s a single source for a dialogue specifically tailored to your questions. It makes the learning more interactive, engaging and less daunting.”

AI Is Only as Good as the Prompts You Write. Perhaps the most important skill you can learn now is prompt literacy.

What are prompts? Prompts are your input to the chatbot or AI tool: the question you ask or the task you want completed. While AI appears easy to use because we can interact with it in everyday language, prompt writing is a complex endeavor. That’s why prompt engineering has become a job in its own right: designing and refining prompts.

How do I write prompts? Be specific, give context, and ask the AI to act “as if.” For example, “You are a human resources professional at a small, liberal arts college. You are writing a job description for an HR generalist. The position’s responsibilities include leading safety and compliance training; assisting with payroll; conducting background checks; troubleshooting employee questions in person and virtually. The qualifications for the job are one to two years in an HR office, preferably in higher education, and a BA.”
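The ingredients of that example (a role, a task and a list of specifics) can even be templated so your team writes consistent prompts. The helper below is a hypothetical sketch; the function name and fields are assumptions for illustration, not part of any AI tool’s API.

```python
# Hypothetical helper for assembling "act as if" prompts from parts.
def build_prompt(role, task, details):
    """Combine a role, a task and supporting details into one prompt string."""
    lines = [f"You are {role}.", f"You are {task}."]
    lines += [f"- {d}" for d in details]  # one bullet per specific
    return "\n".join(lines)

prompt = build_prompt(
    role="a human resources professional at a small, liberal arts college",
    task="writing a job description for an HR generalist",
    details=[
        "Responsibilities: safety and compliance training; payroll; background checks",
        "Qualifications: one to two years in an HR office, preferably higher ed; a BA",
    ],
)
print(prompt)
```

Parameterizing the role and details this way makes it easy to reuse the same structure for the next job description, changing only the specifics.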

You can also ask for certain tones or genres — chatbots are really good at this! “You are an HR professional writing a holiday message to your employees. Please write a friendly, professional happy holiday message in the style of ‘Twas the Night Before Christmas.’” (A sample: “And I in my HR role, with my inbox on mute/Had just settled down for a long winter’s reboot.”)

Resource
Harvard’s IT primer on prompts may be helpful as you develop your own.

Iterate your prompts. Your first, second or even third try might not give you the results you’re looking for. Don’t give up. Keep tweaking, using synonyms (especially for verbs) or different phrasing. Ask the bot itself to help you craft a prompt, or do a web search and try out some free online prompt generators.

Give feedback. You can critique the bot’s output. For example, “be more concise,” “give me multiple options to rephrase this” or “add in more detail having to do with good communication skills in the bullet points about job responsibilities.”

Save your prompts. When you’ve found a prompt that works well, save it in a prompt library. Consider sharing it in a Google Doc or other cloud software so your team can access the prompts, add their own, and refine the shared library.
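A shared prompt library can be as lightweight as a JSON file in shared storage. The sketch below is a minimal, hypothetical implementation; the file name and function names are assumptions for illustration.

```python
import json
from pathlib import Path

# Assumed location for the shared library file.
LIBRARY = Path("prompt_library.json")

def save_prompt(name, text, library=LIBRARY):
    """Add or update a named prompt in the JSON library file."""
    data = json.loads(library.read_text()) if library.exists() else {}
    data[name] = text
    library.write_text(json.dumps(data, indent=2))

def load_prompt(name, library=LIBRARY):
    """Look up a saved prompt by name; returns None if not found."""
    return json.loads(library.read_text()).get(name)

save_prompt(
    "job-description",
    "You are a human resources professional at a small, liberal arts college...",
)
print(load_prompt("job-description"))
```

Keeping the file in a shared drive (or a Google Doc, as suggested above) lets teammates add entries and refine each other’s wording over time.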

Conduct a prompt workshop. Set aside time with your team to experiment and refine prompts and share the results. Try different bots (ChatGPT, Gemini, Llama, etc.) using the same prompt to see how each one responds. What are the pros and cons? Continue using different AI tools, as they’re constantly refining their training data.

AI and Your Future

The HR pros I interviewed all ultimately see great potential in AI freeing up time for employees to focus on career development and service to the campus. Sanchez expects that AI will “replace time consuming in-person training with the use of avatars or gaming-like 3D training.”

Beassie hopes that “routine tasks will be significantly diminished, and more time can be spent on serving the institution with greater data analysis and workforce planning and serving our employees with individualized development and career planning and expanded wellness opportunities.”

Most AI experts believe that in the future, AI will simply be infrastructure, like electricity or running water.

Resource
CUPA-HR’s AI in Higher Education HR toolkit includes best practices, links to resources and examples of AI policies.

 
