Artificial intelligence, if created and implemented responsibly, can help increase diverse, equitable, and inclusive recruitment and retention, DEI specialist Shayne Halls said during a University of Phoenix webinar Thursday.
Halls, president and CEO of Manifested Dreams, a company that connects organizations to AI services, spoke about the various ways AI can improve tasks and projects previously assigned to humans, removing human biases and time-intensive processes along the way.
“AI is not new. AI has been around for a very long time,” Halls said. “From Siri to self-driving cars, AI long ago became, and continues to be, an integral part of our daily lives. From the recommendation algorithms that curate our music playlists to chatbots that assist with customer service inquiries, these are all AI.”
The event took place as part of University of Phoenix's Educational Equity Webinar Series.
It is human nature to have likes and dislikes, which lead to conscious or unconscious biases that may be reflected in people’s decision-making during hiring. Even the most seemingly harmless conversations with coworkers – about vacation spots, sports teams, or shared experiences – can create kinship, and subsequently form biases, Halls said.
“Even if you have the best of efforts, the best of intentions, that doesn't stop you from being pulled into those conversations,” Halls said. “We've all been in the office when someone just comes by your desk and one conversation leads to another topic.”
AI offers a way to remove unconscious bias from the equation and create a more equitable hiring process. Systems can be programmed to ignore factors such as demographic information and assess applicant resumes more objectively, Halls said. And as chatbots, such systems can reduce the influence of external characteristics – appearance, accent, mannerisms – during job interviews.
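The kind of demographic-blind screening Halls describes can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual system: the field names, the list of protected attributes, and the toy scoring rule are all assumptions made for the example.

```python
# Illustrative sketch: strip protected attributes from an applicant
# record before any scoring logic sees it. Field names and the scoring
# rule are hypothetical.

PROTECTED_FIELDS = {"name", "age", "gender", "race", "photo_url"}

def redact(applicant: dict) -> dict:
    """Return a copy of the applicant record without protected attributes."""
    return {k: v for k, v in applicant.items() if k not in PROTECTED_FIELDS}

def score(applicant: dict) -> int:
    """Toy relevance score based only on job-related fields."""
    skills = set(applicant.get("skills", []))
    required = {"python", "sql"}
    return len(skills & required) + applicant.get("years_experience", 0)

applicant = {
    "name": "Jane Doe",
    "age": 42,
    "skills": ["python", "sql", "excel"],
    "years_experience": 5,
}

blind = redact(applicant)
print(score(blind))  # the score is computed without access to name or age
```

As Halls and Cooper both note later in the piece, redacting obvious fields is only a first step; proxies for demographic information can remain in the rest of the record.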
AI can also help retain employees, Halls said. Instead of lengthy, multi-party, year-long waits to process employee engagement surveys and complaints, AI systems can provide significantly shortened turnaround times and offer dissatisfied employees faster support, he said.
“We're talking about now employees having a real feeling of being heard,” Halls said. “No longer are we sitting there, twiddling our fingers, thinking, 'Hey, I rose this issue up in this survey almost a year ago and it's still happening now.' That's where these AI systems can help retain these employees. Because if you feel like your voice is heard, you feel like you are actually being listened to, then you're going to want to stay at that organization.”
Higher education, and education in general, can benefit from artificial intelligence as well: AI-assisted data analysis, translation assistance, automation of administrative tasks, and predictive systems for student success and attrition.
With AI, educational curricula can adapt to fit each student's individual learning needs.
“With adaptive learning platforms, AI-powered tools – like DreamBox – personalize learning experiences for students,” Halls said. “These systems address individual strengths and weaknesses, adapting the curriculum to match each student's pace and each student's needs. Everyone learns differently. ... Everyone does not retain information in the same manner. AI is here to address that.”
However, although artificial intelligence does offer administrators and educators ways to lighten their loads and bolster their work, it’s important to acknowledge how susceptible these systems can be to bias if not developed properly, Halls said.
AI systems are only as good as their creators, who have biases of their own. The programs can only be as unbiased as the data they are trained on, and historical data can come with its own skews and inequities. Data from an organization with a history of bias will produce a biased system, Halls said.
Checks and balances are needed in order for AI to be properly and ethically implemented, he said, suggesting training, diverse development teams, and regular audits for employed AI systems. Without such accountability measures, AI can and will perpetuate existing biases, Halls warned.
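One concrete form a regular audit can take is a disparate-impact check such as the "four-fifths rule," a widely used guideline that flags a selection process when any group's selection rate falls below 80% of the highest group's rate. The sketch below is an assumption-laden illustration, not a method Halls named: the data, group labels, and threshold are made up for the example.

```python
# Illustrative audit sketch using the four-fifths rule: flag any group
# whose selection rate is below 80% of the best-performing group's rate.
# The outcome data here is fabricated for demonstration.

def selection_rates(outcomes):
    """outcomes maps group -> (number selected, total applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_violations(outcomes, threshold=0.8):
    """Return groups whose selection rate ratio falls below the threshold."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return [g for g, r in rates.items() if r / best < threshold]

outcomes = {"group_a": (50, 100), "group_b": (30, 100)}
print(four_fifths_violations(outcomes))  # group_b: 0.30 / 0.50 = 0.6 < 0.8
```

A check like this is cheap to run on every batch of hiring or promotion decisions, which is one reason audits of "inputs and outputs," as Halls puts it, can be made routine rather than occasional.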
“You need to have a diverse [human resources] team that has one singular duty: focus on inputs and outputs of AI systems, ensuring that those inputs and outputs of the AI system are clean, and without bias and prejudice,” Halls said. “These persons cannot have multiple duties. Their only focus can be what the AI system is being given and what the AI system is giving to the workplace, into the organization."
Halls pointed to services such as IBM's AI Fairness 360 toolkit, which claims to help detect and reduce discrimination and bias in AI models. He likened artificial intelligence to “a genius six-year-old child,” capable of anything but also capable of being influenced to do anything.
Separating bias from AI systems is not an easy task, said Dr. Lee Cooper, director of Northwestern University’s Institute for Artificial Intelligence in Medicine.
"It needs to be done very carefully, on a case-by-case basis. There is not a general solution or panacea for de-biasing things,” Cooper said. “It's an AI question but also requires expertise about the specific problem you're trying to solve. So, it requires AI scientists and people who are experts in whatever the domain is that you're working in."
When searching for useful AI systems, investigate not just the tool, but the people who developed it as well, Halls advised.
“Don't just go on the website and look at the tool,” Halls said. “Really take a second to look at the team that developed those tools, and make sure that the team is a diverse team.”