Fuqua Professor Says Artificial Intelligence Needs Human Brainpower
Critical thinking is key to harnessing AI’s game-changing power, says Professor Saša Pekeč
AI-powered machines are already better than humans at performing certain tasks, but human judgment will make the difference in artificial intelligence's most critical applications.
“AI will help you in your decision-making. It won’t replace your brain,” said Professor Saša Pekeč of Duke University’s Fuqua School of Business.
In a talk on Fuqua’s LinkedIn page, Pekeč explained why the limitations inherent to AI make the critical thinking of managers and end-users all the more necessary to avoid the pitfalls of “binary outcomes” and “algorithmic bias,” while making the most of a technology that may revolutionize businesses and consumers’ digital lives.
Why machines are more efficient
The key to AI’s leap to the mainstream is the exponential growth of available data and processing power, Pekeč said. This, combined with scientific innovation, has allowed generative AI to bring artificial intelligence to the masses, “similarly to how Google search disrupted how people access content available on the Internet,” he said.
Computers have been more efficient than humans in “narrowly defined optimization and prediction tasks for at least a quarter of a century,” Pekeč said.
Consider targeted digital advertising. Machines can decide which ads to display to a particular user in milliseconds. “No human can do that, but machines are very good at it,” he said.
Further, advances in machine learning and AI allow for significant improvements in settings that rely on qualitative assessments, such as human resources, Pekeč said.
“Traditionally, the HR success rate in selecting the right candidate for a job opening had been close to 50%,” he said. “But in the last ten years, HR analytics tools that leverage unstructured data have significantly improved the quality of HR decisions in hiring, career planning and talent management.”
How AI is empowering managers
“Staying away from AI is not an option,” Pekeč said, adding that companies must embrace the technology to stay competitive.
While some access barriers remain—for example, high startup costs for data access and processing—Pekeč said the technology has democratized the way managers can understand processes and manage people who have the technical expertise.
“Generative AI is now bypassing the technical lingo and expertise barriers,” Pekeč said, which levels the playing field between managers and their technical teams.
Managers can utilize generative AI to understand what can and cannot be done and can better communicate with their software engineers and data analysts, he said.
Generative AI’s biggest improvement might be in coding, Pekeč said. As long as managers know how to ask the right questions, they can generate the code to solve a particular problem.
“The hottest programming language is English,” he said.
Risks and limitations of AI
AI is “eager to please,” Pekeč said. The machine will always produce an answer, regardless of how confident its algorithm actually is, he said, and that answer is binary, oblivious to nuance.
“It will never say, ‘I am 55% confident this is the right answer,’” Pekeč said. “This could lead to AI amplifying extreme outcomes, which could be particularly problematic if you couple this with AI’s inability to assess the veracity and quality of the training data.”
This is particularly worrisome when the wisdom of the crowd becomes a vehicle for misinformation, he said.
“Just because content is produced en masse, doesn’t mean it is reliable,” Pekeč said.
Another limitation of AI is that it is backward-looking, limited by the data it has access to. “Let's say we are in 2019 and we have today’s AI tools,” Pekeč said. “Which AI tool would forecast that a global pandemic would occur within a year? With all its impacts for the global economy and for humanity? You can see how AI, by default, has a blind spot when it comes to ‘unknown unknowns.’”
Pekeč said another risk of relying on AI tools is algorithmic bias. All data analytics methods reduce uncertainty in their conclusions with more patterns and similarities in the data, he said, so the system aiming to minimize risks might go for more “ordinary” recommendations—for which there is a lot of similar data—and discriminate against less ordinary ones.
“This could lead to bias and discrimination against exceptional but ‘unusual’ candidates in hiring, for example,” Pekeč said.
People should be aware of such risks, and critical thinking is essential when relying on AI recommendations for decision-making, Pekeč said.
AI will support decision-making, not replace critical thinking
Pekeč said that, as with any other “shiny new object,” people need to know what they are trying to achieve with AI.
“You don’t use a technology just because everybody else is using it. This would be a recipe for disaster. You need to understand limitations, blind spots, and pitfalls,” Pekeč said.
Further, understanding the reasons behind any AI recommendation is critical to being able to rely on it.
“Suppose you go to your annual physical and your physician—based on an AI analysis of the data—tells you, ‘Look, you need to have brain surgery right now.’ You would probably want clarity on why the machine is recommending a particular course of action.”
Pekeč believes AI is “a game-changer,” the latest and most powerful manifestation of the benefits of the digital revolution.
“However, a distinguishing characteristic of good decision-making is the ability to think critically, recognize nuances, and identify true expertise, resulting in actions that sometimes go against the common wisdom,” Pekeč said. “That ability is more important than ever, and remains critical when leveraging AI to supercharge our decision-making.”
This story may not be republished without permission from Duke University’s Fuqua School of Business. Please contact media-relations@fuqua.duke.edu for additional information.