Managers and the AI Challenge

Tanmoy Goswami

December 17, 2023

We need to dig beyond the simplistic stereotype of people terrified that AI will ‘steal their jobs’, and seek a nuanced understanding of the psychological impact of AI in the workplace. 

Twelve years ago, Silicon Valley venture capitalist Marc Andreessen coined a slogan that would reverberate at buzzy events where industry insiders gathered to gush over the exciting future of technology:

“Software is eating the world.”

Andreessen was referring to the rise of newfangled internet companies like Facebook, Twitter, and Amazon. While some were worried that this was just another bubble (a la Webvan and pets.com), Andreessen held that “we are in the middle of a dramatic and broad technological and economic shift in which software companies are poised to take over large swathes of the economy.”

Well. If the birth of a bunch of platforms that let you post photos of your lunch, rant about your bank’s bad customer service, or order T-shirts online was proof that software was eating the world, then the current bedlam — this time around artificial intelligence — would suggest that the world as of today is buried deep in software’s bowels.

As you read this, entire industries, cultures, and ways of life are in the process of being digested by AI engines hungry for data. The outcome of this metabolism will be every bit as “dramatic” and “broad” as the internet revolution of the previous generation — except on steroids.

How this moment makes you feel in your gut depends on whether you believe that AI is an evil force that will destroy humanity and render us all jobless, or, like Andreessen, you are a votary of the “techno-capital machine, the engine of perpetual material creation, growth, and abundance.” But even as we participate in such philosophical wrangling on the civilizational influence of AI, we must also make it our priority to ask other, more urgent, more grounded questions.

Questions such as: What are the people in the trenches thinking about the ongoing onslaught of AI, and how are they responding to it, in the here and now, in the real world and in real time?

I am talking about the average manager (or any employee, really) listening to their CEO’s grand vision of business in the age of AI and trying to decode what it means for them. Work is one of the most emotive elements shaping our identity. The unprecedented chaos, confusion, and uncertainty wrought by the rise of AI mean that it is vital to make sense of the psychological landscape it is creating in its wake: What is all this hype and frenzy doing to organisations’ rank and file? What are their greatest hopes and anxieties around AI? How is the prevalent discourse affecting their belief in technology, their relationship with work, and their self-perception as professionals, and how might this influence organisational policies? Are we asking these questions as much as we should?

Short answer: No. And that’s a problem.

A brief history of disruption

As with every big technological wave before it, AI is profoundly changing what it means to be a (productive) human.

Dr Rishikesha Krishnan, director of IIM Bangalore and Ram Charan Chair in Innovation and Leadership at the institute, points out that the dominant thread in the zeitgeist today “goes back a few hundred years, and that is the thread around productivity improvement. Think back to the industrial revolution, which is when the focus on improving human productivity started with the invention of machines, such as in the cotton textile industry and so on. Then came the steam and electricity revolutions. Fast forward to the 20th century, and you enter the age of software. AI is the latest manifestation of this story, which promises three core benefits: improving efficiency, boosting productivity, and reducing cost.”

At the heart of each of these gains is one of AI’s pivotal, foundational promises: helping leaders make better decisions.

Research led by Dr Guangming Cao, head of the Digital Transformation Research Center at Ajman University in the UAE, lays out that the history of AI in decision-making can be divided into two broad phases. The first phase began in the mid- to late 1970s, peaking at the start of the 1990s, when “expert systems”, proposed specifically for decision-making, were intended to replicate the performance of a skilled human decision maker. One of the earliest examples was MYCIN, an expert system developed at Stanford University that diagnosed blood infections and recommended appropriate medical treatment.
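For a flavour of how such rule-based expert systems worked, here is a minimal, purely illustrative sketch in Python. The rule format and the findings below are invented for this example; MYCIN’s real knowledge base contained hundreds of rules and used certainty factors to weigh uncertain evidence.

```python
# A toy rule-based "expert system", loosely inspired by the if-then
# structure of systems like MYCIN. The rules below are invented for
# illustration and are not real medical knowledge.

RULES = [
    # (set of required findings, conclusion)
    ({"gram_negative", "rod_shaped", "anaerobic"}, "likely Bacteroides"),
    ({"gram_positive", "grows_in_chains"}, "likely Streptococcus"),
]

def infer(findings: set[str]) -> list[str]:
    """Fire every rule whose conditions are all present in the findings."""
    return [conclusion for conditions, conclusion in RULES
            if conditions <= findings]

print(infer({"gram_negative", "rod_shaped", "anaerobic"}))
# -> ['likely Bacteroides']
```

The key idea, then as now, was to make an expert’s reasoning explicit: knowledge lives in human-readable rules, and the system’s conclusions can be traced back to the rules that fired.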

We are now in the middle of the second phase, which began around the turn of the millennium. AI use in decision-making was intermittent during the 2000s, Cao et al. point out, but in the past decade its playing field has expanded rapidly, thanks to research on deep learning systems.

What is deep learning?

Deep learning is a subset of machine learning that is essentially a neural network with three or more layers. These neural networks attempt to simulate the behaviour of the human brain — albeit far from matching its ability — allowing them to ‘learn’ from large amounts of data. Deep learning drives many everyday products and services (such as digital assistants, voice-enabled TV remotes, and credit card fraud detection) as well as emerging technologies (such as self-driving cars).

Source: IBM
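To make the ‘three or more layers’ idea concrete, below is a minimal sketch of such a network in Python (it assumes NumPy is installed): an input layer, one hidden layer, and an output layer, trained to learn the XOR function. It is a toy example, far removed from the scale of the systems this article describes.

```python
# A tiny three-layer neural network (input -> hidden -> output) trained
# with plain gradient descent to learn XOR. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)      # hidden-layer activations
    out = sigmoid(h @ W2 + b2)    # network predictions

    # Backward pass: gradients of the squared error
    # (constant factors folded into the learning rate)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # typically converges towards [0, 1, 1, 0]
```

Deep learning systems stack many more such layers and learn their weights from vastly larger datasets, but the underlying mechanics, forward passes and gradient-based weight updates, are the same.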

Unpacking managers’ attitudes towards AI

Across this dynamic history spanning half a century, there has been little effort to understand managers’ attitudes towards AI. While there is growing conversation about AI’s technical accuracy, potential value, and data availability, we don’t know enough about the mental makeup of the human actors who are supposed to use this powerful tool.

There is very limited empirical research focused on understanding managers’ attitudes and behavioural intentions towards using AI from a human-centred perspective, Cao et al. write. We lack clarity on whether and when people are willing to cooperate with machines, even though conditions favouring IT acceptance have long been seen as a central pillar in research into IT innovations.

The researchers point to the obvious reason this is perilous: the potential benefit of human-AI symbiosis in organisational decision-making can only be fully realised if human decision makers accept the use of AI.

You could argue that AI is an unstoppable force, and ultimately everyone will have to make peace with whatever it brings. But in an ideal world, no organisation should have its people fall in line kicking and screaming (an area where far too many businesses have an inglorious track record). To avoid causing mass distress and pandemonium, it is essential to closely understand people’s mindsets and design compassionate, human-centred interventions.

The “AI will steal jobs” narrative

In the absence of granular insights, the media and popular culture have remained saturated with the same old stereotype of people terrified that AI will ‘steal their jobs’.

This is a valid concern, of course. “Work will get reorganised, and roles will change,” says Dr Krishnan. “It is reasonable to expect that at least in the short run, the number of jobs will go down, including certain kinds of managerial jobs.” Some comfort comes from the prediction that new jobs will also get created, but the dominant narrative is one of fear.

However, this is a simplistic and one-dimensional reading. It prevents us from getting a nuanced picture of sentiments on the ground. In fact, it could be leading us astray by glossing over crucial contradictions.

In June 2023, BCG published results from one of the few comprehensive surveys of workplace attitudes towards AI. It reached 13,000 people, from executive suite leaders to middle managers and frontline employees, in 18 countries to understand their thoughts, emotions, and fears about AI.

The big revelation? Fifty-two percent of respondents were optimistic rather than concerned about AI, a significant jump from 35% the previous year.

Time to throw caution to the winds? Not so fast.

The same survey discovered that leaders were much more optimistic about AI than frontline employees (62% vs. 42%). Also, regular users of generative AI (ChatGPT being the most common example) were a lot more bullish than nonusers (62% vs. 36%).

Let’s zoom in a little. The dissonance between managers’ and frontline employees’ attitudes to major work trends isn’t new. Most recently, we have seen it play out in the work-from-home debate, with bosses being gung-ho about the return to office and issuing unilateral diktats to this effect, and employees feeling understandably bitter and let down. What can this ‘optimism gap’ teach managers about AI adoption at scale? Will it push them to be more collaborative in policy decisions to minimise friction?

In their paper, Cao et al. hint at a deeper reason to be cautious about managers’ apparent optimism. Citing research led by Professor Aaron C Elkins, an expert on management information systems, they argue that this optimism may be punctured when human experts feel threatened by AI systems that contradict their own judgements:

“When asked about new technologies, experts in deception detection are very enthusiastic and interested in new tools and technology. However, when confronted with the actual technology, they reject its use and ignore it all together.”

The second data point from the BCG survey, on the difference in optimism between users and nonusers, raises other critical questions — such as who has the privilege to get ‘regular’ access to generative AI tools in the first place, and who doesn’t? What socioeconomic factors determine this access? And what role do leaders have in mitigating this gap and creating more egalitarian access?

If we don’t engage with these questions, we risk perpetuating the digital divide that kept historically marginalised groups out in earlier eras, this time with potentially more damaging implications.

Responsible use of AI — the big, understated worry?

Even as mainstream narratives make it seem that workers are only preoccupied with the impact of AI on their livelihoods, the BCG survey indicates that they care about something bigger: responsible and ethical use of AI.

While 71% of respondents believe that the rewards of generative AI outweigh the risks, 79% support AI regulation.

“This represents a marked shift in attitude toward government oversight of technology,” BCG says. “During the early days of the Internet, a laissez-faire, light-touch ethos prevailed. Today, employees are more willing to acknowledge that government can play a constructive role in overseeing a relatively new commercial technology.” (Whether governments will do their job well is another story.)

Many companies claim that they are taking AI safety seriously. But once again, not everyone within organisations is buying it: Among leaders, 68% believe that their organisation has an adequate responsible AI program in place. The figure among frontline employees is a measly 29%, underscoring an alarming trust gap.

Conversations on AI in the workplace will remain shallow and misleading as long as we don’t ask questions about these fundamental issues, how leaders are responding to them, and how those responses are shaping the future of this wondrous “techno-capital machine”.

Using AI for decision-making: Three elements to remember

  1. Human-centred approach: Humans and AI form a unique partnership; for that partnership to work, they cannot be treated as separate entities. Human perceptions, concerns, and attitudes must be front and centre in policy design.
  2. Inclusion of both technology acceptance and avoidance factors: Using AI for organisational decision-making can create both positive and negative impacts, which could influence managers’ attitudes and behavioural intentions to either accept or avoid using AI. Policies must weigh both sets of factors.
  3. Factors related to personal concerns: Using AI for organisational decision-making may raise serious concerns among managers about their personal development and well-being, which could significantly influence their attitudes and behavioural intentions towards using AI. Thus, any policy must factor in personal well-being and development concerns as well.

Source: Guangming Cao, Yanqing Duan, John S. Edwards, Yogesh K. Dwivedi; ‘Understanding managers’ attitudes and behavioral intentions towards using artificial intelligence for organizational decision-making’; Technovation, Volume 106, 2021*

*This paper proposes the three elements above in the context of academic research, but they could be just as relevant for any workplace.
