HOW AI WILL CHANGE THE WORKPLACE
We asked some top thinkers from different fields to weigh in on what’s ahead, as the AI explosion compels businesses to rethink, well, almost everything
Artificial intelligence has been affecting how we work for some time—helping to craft job postings and evaluate applications, judging how efficiently we complete jobs and, for gig workers, determining assignments and pay.
But in the past year, and especially the past six months, generative AI has supercharged the potential of technology to help, hinder or reorient how we work. Visual tools like DALL·E 2 and Midjourney may drastically change graphic design. Large language-model text generation, beginning in earnest with the release of ChatGPT, promises to affect every activity that involves touching a keyboard.
To learn more about how the worlds of work and AI will interact, we spoke with experts in computer science, human resources, recruiting, corporate leadership, psychology and more. Here are some of their predictions.
AI will continue the current process of automating parts of workers’ jobs. But while today’s automation is often described as applying to dull, dirty and dangerous tasks such as moving parts in a factory or warehouse, generative AI brings a new dynamic: It primarily supports knowledge work, producing first drafts of documents, emails, presentations, images, video, product designs and more.
So, knowledge workers might spend more time editing than creating, particularly as generative AI is embedded into all the software products they use today. For instance, instead of an email system just typing ahead a few words, it could draft several paragraphs. Customer-relationship-management software could suggest topics to discuss with a sales prospect and even a script to follow. And a salesperson could describe a presentation in natural language and have draft slides generated automatically, drawing on corporate data and images to fill out the details.
But in some applications, generative AI’s potential for transformative impact really comes to the fore. In research and development, for example, experiments using generative AI to support writing software code have shown large productivity gains. That doesn’t mean we’ll need far fewer software engineers, because the world needs more software. Generative AI also has the potential to improve the productivity of contact centres. Technologies that automate interactions with customers already existed; generative AI can make those interactions feel much more natural.
—Michael Chui, partner, McKinsey Global Institute
Artificial intelligence is likely to flatten many organisations due to its ability to automate work activities. Right now, most organisations have entry-level people who perform routine tasks, midlevel individuals who supervise them and high-level employees who set the direction of the organisation.
That organisational structure will no longer be necessary. AI can automate many of the tasks performed by entry-level workers. Accounting functions, purchase orders and job requisitions are already being automated, and workplaces no longer need people who manually compile or analyse information.
As generative AI becomes more widely deployed, even more tasks will be automated. In addition, job supervision and assessment won’t need as much human oversight. Customers can rate employees on how well they perform basic tasks and on whether they got the services they wanted. Using data analytics and AI, companies can use those responses to weed out low-performing workers and reward their top performers. The end result will be fewer layers of management and fewer employees overall.
—Darrell West, senior fellow, governance studies, Brookings Institution
In the past, managers turned to software to judge workers on technical matters—counting keystrokes or time away from the screen. Now companies are using machines to judge how much empathy their employees show.
I was recently sent a “management tip of the day,” advice on how to prepare for a job interview conducted by an artificial intelligence—a process that is all too common these days. A good score required that I appear “natural” with the machine, defined as injecting “authenticity and humanity to the interview.” It seemed a through-the-looking-glass request. A machine would be judging me on qualities that only human beings can exhibit.
The AI judgments don’t end with interviews. It is increasingly common for corporations to use AI programs to monitor employee empathy on the job. For instance, in call centres, AI programs coach and score workers on an empathy scale to judge their performance with callers.
With the addition of ChatGPT to a full suite of office products, texts, emails and calls, there is no limit to the interactions that may be judged by pretend-empathy machines. They will pretend to understand jealousy, competition, depression and insecurity, all the messy human feelings that come up in the life of a firm.
When machines test us on how we respond to such human complexities, high scores may go to those who exhibit qualities that machines value most—consistency and a bias toward tidying up what seems messy. Those who don’t stack up may lose their jobs.
It is backward thinking: Technology redefines human empathy as what machines can understand. Having built the machines that will judge us, now we will train ourselves to please the machines.
—Sherry Turkle, author and Abby Rockefeller Mauzé Professor of the Social Studies of Science and Technology, Massachusetts Institute of Technology
We are already seeing the rise of digital assistants that speak with a human voice and can use human appearance and social intelligence to negotiate disputes, brainstorm business strategies or conduct interviews. But our research illustrates that people may act less ethically when collaborating via AI.
Traditionally, teammates establish emotional bonds, show concern for each other’s goals and call out their colleagues for transgressions. But these social checks on ethical behaviour weaken when people interact indirectly through virtual assistants. Instead, interactions become more transactional and self-interested.
For instance, in typical face-to-face negotiations, most people follow norms of fairness and politeness. They feel guilt when taking advantage of their partner. But the dynamic changes when people use an AI to craft responses and strategies: In these situations, we found, people are more likely to instruct an AI assistant to use deception and emotional manipulation to extract unfair deals when negotiating on their behalf.
Understanding these ethics risks will become an active focus of business policy and AI research.
—Jonathan Gratch, professor of computer science, University of Southern California
AI will enable older workers to be seen, even by those with ageist eyes, as young again.
Younger adults tend to excel at work that draws on fluid intelligence: analysing and solving discrete problems quickly. Older workers are thought to exhibit greater crystallised intelligence, the capacity to leverage experience and knowledge gained over years to quickly see patterns, nuance and emotional insights, and to determine which problems should be addressed and which are just noise.
AI is likely to provide a kind of augmented intelligence to older workers, enabling experienced professionals to fully leverage their talent and skills.
For example, AI will be an invaluable collaborator with physicians, speeding the collection and organisation of critical information such as patient symptom history, genetic profiles, medication interactions and past successful treatment plans for similar conditions. These systems will enable physicians of all ages to gather information quickly, but older doctors will be better equipped to apply their years of experience and knowledge to validate AI diagnoses and treatment recommendations.
AI will be more than a collaborative assistant to older workers. It will also be a valuable coach. The sheer growth and velocity of knowledge and technology are making training and upskilling essential. Unfortunately, many employers don’t invest in older-worker education. Now AI applications are being deployed in workplace education to address individual learning and knowledge gaps, helping older workers remain current and competitive.
—Joseph F. Coughlin, director of AgeLab, Massachusetts Institute of Technology
Today, most organisations suffer from a “digital dexterity gap,” where the workforce is largely unable to keep pace with fast-changing technology. Organisations have more technology than their employees are comfortable using, creating barriers to efficiency and productivity growth.
AI services strip away complexity. By using conversational interfaces and natural-language processing, AI removes the need for workers to master complex computer functions and menus. People simply describe what is needed, in nontechnical language, and refine their requests to get better output.
An employee, for example, could give an AI historical data and say, “Find and rank all the variables that will determine the market potential for this new product.” Before conversational interfaces were developed, getting that information would have required a lengthy and complex series of interactions.
—Matt Cain, vice president and distinguished analyst, Gartner
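To make that example concrete, here is a minimal Python sketch of the kind of analysis an AI assistant might run behind such a plain-language request. The data and column names are hypothetical, and the correlation ranking is a deliberately crude stand-in for whatever a real system would do under the hood.

```python
# Illustrative sketch only: rank candidate drivers of market potential by how
# strongly they track a historical outcome. All data below is made up.
# Requires Python 3.10+ for statistics.correlation.
from statistics import correlation

# Hypothetical history: one value per prior product launch.
history = {
    "ad_spend":         [120, 90, 200, 60, 150],
    "price_point":      [19.0, 24.0, 15.0, 29.0, 18.0],
    "channel_coverage": [0.4, 0.3, 0.7, 0.2, 0.5],
}
first_year_sales = [310, 220, 540, 140, 400]  # the outcome we care about

# Rank each variable by the strength of its linear association with sales,
# a crude proxy for "which variables will determine market potential."
ranked = sorted(
    ((name, abs(correlation(values, first_year_sales)))
     for name, values in history.items()),
    key=lambda item: item[1],
    reverse=True,
)

for name, strength in ranked:
    print(f"{name}: correlation strength {strength:.2f}")
```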
As AI takes over routine tasks, there will be a temptation to cut the whole tier of entry-level employees: Summarising documents, answering routine emails, writing basic computer code and solving simple logistical challenges are all tasks that AIs can do about as well as an inexperienced human, and at much lower cost.
But employers still need an on-ramp for new hires. If you stop hiring entry-level employees, you’ll have to do all your midlevel hiring from outside the organisation. And if every organisation pares back on entry-level hires, it will get harder and harder to find experienced midcareer talent anywhere.
That’s why it pays to cultivate your own long-term talent pool by hiring green employees but rethinking how they are tasked and trained. Instead of piling grunt work on your juniors and trusting that they’ll learn through observation, assign them more challenging tasks, like drafting reports instead of just summarising them; the explosion in AI research and writing tools means that kind of work is now well within the grasp of inexperienced hires. With more active coaching and mentoring, these green employees can grow into valuable colleagues much more quickly.
—Alexandra Samuel, digital-workplace speaker and co-author of “Remote, Inc.”
So many jobs involve writing standard responses—thank-you notes to customers, responses to job applicants and unfortunately term papers—that AI is instantly and easily used in almost every white-collar role.
The concern isn’t that the responses produced aren’t original or creative. How creative does a performance appraisal need to be? It is that if ChatGPT writes the report, the “author” hasn’t thought about it, hasn’t weighed the arguments and then come to their own conclusions in the text. They cannot explain to someone why the report says what it does, but they now have to live with its conclusions.
What happens when the ChatGPT report fails to include proprietary information that you could have found if you searched yourself, and it changes the conclusions? How do we explain to a subordinate why the appraisal written by ChatGPT gave them a lower score compared with last year, even though their performance seemed to be the same? The temptation to use it without thinking through the arguments and explanations could lead to big mistakes.
—Peter Cappelli and Sonny Tambe, professor and associate professor, Wharton School of the University of Pennsylvania
Workers are already using ChatGPT to craft the perfect Facebook ad or tools like Descript to edit videos, but AI will get incorporated into more upstream work. AI will be in the boardroom, brainstorming sessions and planning meetings.
Imagine an AI system that runs global simulations and impact analyses for 5,000 different budget plans. Or an AI that proactively writes new code for you when it discovers that you have a bottleneck in your sales planning. Or a proxy AI trained on customer research that allows you to have simulated conversations with your target market. We’re moving from task-oriented AI to goal-oriented AI, and enterprises are looking to leverage it safely, securely and ethically.
—Allie K. Miller, AI entrepreneur, adviser and investor
Many asset-management companies are now offering hybrid advisory services—involving both human advisers and algorithms—to their clients. But these new services are unlikely to reduce the demand for human advisers.
Instead, automation is expanding the market for financial advisers by making it more cost effective to serve clients with lower levels of wealth. Human advisers can now cater to more clients, since certain tasks, such as addressing simple customer queries and constructing portfolios, can be automated. As a result, asset managers are now hiring more human advisers instead of laying them off.
In addition, the requirements for a financial adviser’s success are changing. As more portfolio management is turned over to algorithms, technical portfolio-allocation skills are becoming less significant. It is becoming more important for advisers to explain how the algorithms operate and to assist investors in navigating turbulent market conditions. Our research shows that human advisers are still essential for customer satisfaction and retention because of their ability to reduce clients’ discomfort with interacting with algorithms and to reduce clients’ uncertainty about the algorithms’ performance.
—Alberto Rossi, finance professor and director of the AI, Analytics and Future of Work Initiative, Georgetown University
AI-powered “concierge” systems will reduce or eliminate the frustrating search for answers that many employees endure today when seeking services from their employer. These systems will help employees make the most of their benefits, stay compliant with policies or simply find out information about their colleagues, organisation structure or customers that can sometimes be difficult to unearth in large organisations. When do my health benefits renew? What is my current deductible? What is the policy for meal expenses in New York?
What’s more, in the hybrid work environment, AI-driven concierge tools will book conference rooms, optimise the location of colleagues in the office so they can better collaborate, and help office managers plan capacity and services.
—Joe Atkinson, U.S. chief products and technology officer, PwC
At its best, AI will drive better collaboration and productivity. It will help employees turn notes into documents and documents into presentations. Yet human judgment is key to unlocking AI’s power. Our data reveals that only around half of employees believe they know when to question the results of automation or AI—the other half don’t think they have that skill. But generative AI is already known to hallucinate—make up false facts—and employees who blindly follow its outputs risk failing.
So, companies must equip employees with the skills and inclinations needed to successfully use AI. Rather than acting on the AI’s meeting summary alone, employees must understand that talking to human colleagues who attended the meeting isn’t optional. They must also learn to proofread AI-produced text, confirming cited facts with outside sources. And governance structures must ensure that AI-produced content always includes a human in the loop before it is used.
—J.P. Gownder, vice president and principal analyst, Forrester
The emergence of AI tools and data analytics is transforming the way organisations discover, assess and select talent. If trained with the right data, AI models can also compare candidate profiles to a company’s most successful employees, identify professionals with a proven record and determine who is most likely to consider a job change.
For example, for certain roles, high performers’ profiles include a broad range of skills that are relevant to multiple roles, while for other functions, optimal skill sets are much narrower and more specific. Our data indicates that comparing a candidate’s skills to those of high performers produces the most predictive indicator of future success, particularly for contract jobs.
Also, AI models can be further enhanced by incorporating individual performance data for employees or contractors who have previous experience with an employer. There is a wealth of such data available to talent solutions firms that employ hundreds of thousands of contractors annually.
Ultimately, though, it is important to think of AI as a tool for the human art of recruiting, not a substitute for it. Assessing and selecting talent requires insight into a candidate’s communication skills, attitude and determination, and into what it takes to succeed in the role.
—M. Keith Waddell, president and CEO, Robert Half
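As an illustration of the kind of comparison described above, here is a brief, hedged Python sketch that scores a candidate’s skills against the skill sets of a role’s high performers. The skill lists and the overlap metric (Jaccard similarity) are assumptions made for the example, not any firm’s actual model.

```python
# Illustrative only: score a candidate against high performers' skill sets
# using Jaccard similarity (shared skills / combined skills). Data is invented.

def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap between two skill sets, from 0 (none) to 1 (identical)."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

high_performer_skills = [
    {"sql", "python", "forecasting", "stakeholder communication"},
    {"sql", "excel", "forecasting", "data visualisation"},
]

candidate = {"sql", "python", "data visualisation"}

# Average the candidate's overlap with each high performer's profile.
score = sum(jaccard(candidate, hp) for hp in high_performer_skills) / len(high_performer_skills)
print(f"Skill-match score: {score:.2f}")
```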
Early research suggests that while generative AI is likely to boost the productivity of all workers, it may benefit low-skilled workers more. A randomised field experiment by Microsoft reported that generative AI enabled a 55% decrease in average task-completion time for software developers, with the most benefit for older developers and those with less programming experience. Similarly, a study from MIT reports that ChatGPT’s use in professional writing raises average productivity and quality for low-ability workers more than for their high-ability peers.
In an ongoing experimental study with M.B.A. students who were tasked with writing business reports, I found that ChatGPT’s availability increased not only productivity but also student satisfaction. More students expressed a desire to write when a tool like ChatGPT was available. In short, the impact of generative AI might not be just a general increase in productivity but also a narrowing of the productivity gap between low-skilled workers and high-skilled ones.
—Kartik Hosanagar, John C. Hower Professor, Wharton School of the University of Pennsylvania
It is the classic email scam: An employee receives a bogus note that appears to be from their manager, telling them to transfer funds to some account. For this to be convincing, the attacker needs to access the company’s computer systems to learn about the firm and the target, including their personal details.
AI makes this scamming much easier—and more dangerous.
By getting access to companies’ internal emails and nonpublic reports, hackers can use AI to generate very convincing messages. For example, the message might start with: “Fred, it was great to have dinner with you and your wife last Wednesday, we should do that again. Meanwhile, I need you to…”
Or how about a phone call or videoconference with your boss? Deep fakes make it possible to imitate the voice and even the image of your manager.
AI may also lead to smaller and smaller targets for scams. If it takes lots of manual labor to create customised spear-phishing emails, it is not worth it for hackers to cheat people out of $100. But if AI makes it trivial and cheap to create phoney emails, no target is too small to bother with.
All this substantially raises the level of skepticism we must bring to such requests. Procedures will have to be put in place to validate the authenticity of the person you are dealing with. In many cases, a phone call might be sufficient. A somewhat deeper approach might be a phone call to the boss’s administrative assistant in addition to the boss, a bit like doing multifactor authentication on the computer. In extreme cases, a face-to-face meeting might be necessary.
—Stuart Madnick, professor of information technologies, MIT Sloan School of Management
AI helps organisations build for the future by automatically detecting employee, team and organisation-wide skills, and by identifying ways to address gaps before management is even aware of them.
For roles like nurses, software developers and marketers, the necessary skills are constantly changing, and it can be tough for organisations to keep track of what is needed. Nurses, for instance, must become familiar with an ever-increasing number of tech platforms, as well as with data analysis to help improve patient outcomes.
As these needs evolve, AI can help keep track of what skills organisations need and predict what they might need next. For instance, a business could use AI to scan job descriptions in its industry to look for trends. The AI might notice that lots of marketing jobs now require employees to understand new types of analytics—and your employees must understand them, too, or miss out on important strategic insights.
—Mahe Bayireddi, CEO and co-founder, Phenom
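To illustrate the kind of trend scan described above, here is a small Python sketch that counts how often named skills appear in recent job postings versus older ones. The postings, skill list and string matching are simplified assumptions for the example; a production system would be far more sophisticated.

```python
# Illustrative sketch: spot skills that are rising in demand by comparing
# keyword counts across two batches of job postings. All text is invented.
from collections import Counter

SKILLS = ["ga4", "sql", "attribution modelling", "copywriting"]

def skill_counts(postings: list[str]) -> Counter:
    """Count how many postings mention each tracked skill."""
    counts = Counter()
    for text in postings:
        lowered = text.lower()
        counts.update(skill for skill in SKILLS if skill in lowered)
    return counts

last_year = [
    "Marketing analyst: copywriting, SQL",
    "Growth marketer: copywriting",
]
this_year = [
    "Marketing analyst: GA4, SQL, attribution modelling",
    "Growth marketer: GA4, copywriting",
]

old, new = skill_counts(last_year), skill_counts(this_year)
for skill in SKILLS:
    change = new[skill] - old[skill]
    if change > 0:
        print(f"{skill}: appears in {change} more posting(s) than last year")
```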