by Xavier Victor Kanu
Artificial intelligence may prove to be the greatest jobs engine the world has ever seen, and it is transforming our world sooner than many expect. As A.I. has grown smarter, more and more people are wondering, “Is my job safe?” If you are one of them, you are in the right place. Technology has progressed nonstop for 250 years, and in the U.S. unemployment has stayed within a narrow band of 5 to 10% for almost all that time, even when radical new technologies such as steam power and electricity came on the scene. Yet according to a study from the Oxford Martin School at Oxford University, 47% of jobs in the United States are at high risk of automation over the next 20 years.
The threat that automation will eliminate a broad swath of jobs across the world economy is now well established. As artificial intelligence (AI) systems become ever more sophisticated, another wave of job displacement will almost certainly occur.
There is no shortage of angst when it comes to the impact of AI on jobs. For example, a survey by the Pew Research Center finds Americans are roughly twice as likely to express worry (72%) as enthusiasm (33%) about a future in which robots and computers are capable of doing many jobs that are currently done by humans.
However, at least one set of experts believes jobs will be shredded, but not eliminated. Instead of worrying about job losses, executives should be helping to redesign jobs so that AI and machine learning take over boring tasks while humans spend more time on higher-level ones.
That’s the word from Erik Brynjolfsson and Daniel Rock of MIT and Tom Mitchell of Carnegie Mellon University, who point out that the impact on jobs of machine learning, the self-programming, self-adjusting core of AI, is iffy. “ML will affect very different parts of the workforce than earlier waves of automation,” they state in a recent paper. Instead, automation will occur on a task-by-task basis.
“Tasks within jobs typically show considerable variability in ‘suitability for machine learning’ while few — if any — jobs can be fully automated using machine learning,” they continue. “Machine learning technology can transform many jobs in the economy, but full automation will be less significant than the re-engineering of processes and the reorganization of tasks.”
What jobs are most likely to see tasks handled by AI or machine learning? Oddly enough, funeral directors rank high on the automatable list. Here are the roles Brynjolfsson and his co-authors identify as top candidates for machine learning:
• Mechanical drafters
• Morticians, undertakers, and funeral directors
• Credit authorizers
• Brokerage clerks
And the jobs least likely to be shredded by AI/machine learning:
• Massage therapists
• Animal scientists
• Public address system and other announcers
• Plasterers and stucco masons
Brynjolfsson and his colleagues say we’re having the wrong debate when it comes to AI: instead of pondering how jobs will be wiped out, people need to focus on “the redesign of jobs and re-engineering of business processes.” While AI and machine learning will be everywhere, “the suitability for machine learning of work tasks varies greatly.” “The high and low suitability-for-machine-learning tasks within a job can be separated and re-bundled.”
Dr. Irving Wladawsky-Berger, former IBM mover and shaker and now one of the most informed observers of the digital economy, provided perspective on the Brynjolfsson report, noting that some of the job activities “are more susceptible to automation, while others require judgment, social skills and other hard-to-automate human capabilities. But just because some of the activities in a job have been automated, does not imply that the whole job has disappeared.” To the contrary, he continues, “automating parts of a job will often increase the productivity and quality of workers by complementing their skills with machines and computers, as well as enabling them to focus on those aspects of the job that most need their attention.”
Bracing for job shredding calls for rethinking the various tasks for which employees assume responsibilities. Brynjolfsson and his co-authors warn, however, that arbitrarily bundling machine-learning-enabled and non-ML tasks into a single job is counterproductive. “Bundling SML [suitable for machine learning] and non-SML tasks prevents specialization and locks up potential productivity gains.”
Ultimately, the key to success in this emerging environment is to be able to marshal and capitalize on AI capabilities to deliver more value and service to customers. Employees can play a vital role in identifying opportunities, training models and algorithms, and taking a leadership role in determining if the systems are delivering business value in an ethical way.
Jobs will be enriched and elevated by AI and machine learning, but the best jobs will be those created to employ AI that links customers to the services and products they need.
But here is what we’ve been overlooking: Many new jobs will also be created — jobs that look nothing like those that exist today.
In Accenture PLC’s global study of more than 1,000 large companies already using or testing AI and machine-learning systems, we identified the emergence of entire categories of new, uniquely human jobs. These roles are not replacing old ones. They are novel, requiring skills and training that have no precedents. (Accenture’s study, “How Companies Are Reimagining Business Processes With IT,” will be published this summer.)
More specifically, new research published in MIT Sloan Management Review identifies three new categories of AI-driven business and technology jobs, labeled trainers, explainers, and sustainers. Humans in these roles will complement the tasks performed by cognitive technology, ensuring that the work of machines is both effective and responsible — that it is fair, transparent, and auditable.
The first of these categories, trainers, will need human workers to teach AI systems how they should perform, and it is emerging rapidly. At one end of the spectrum, trainers help natural-language processors and language translators make fewer errors. At the other end, they teach AI algorithms how to mimic human behaviors.
Customer service chatbots, for example, need to be trained to detect the complexities and subtleties of human communication. Yahoo Inc. is trying to teach its language processing system that people do not always literally mean what they say. Thus far, Yahoo engineers have developed an algorithm that can detect sarcasm on social media and websites with an accuracy of at least 80%.
Consider, then, the job of “empathy trainer” — individuals who will teach AI systems to show compassion. The New York-based startup Kemoko Inc., d/b/a Koko, which sprang from the MIT Media Lab, has developed a machine-learning system that can help digital assistants such as Apple’s Siri and Amazon’s Alexa address people’s questions with sympathy and depth. Humans are now training the Koko algorithm to respond more empathetically to people who, for example, are frustrated that their luggage has been lost, that a product they’ve bought is defective, or that their cable service keeps going on the blink even after repeated attempts to fix it. The goal is for the system to be able to talk people through a problem or difficult situation using the appropriate amount of understanding, compassion, and maybe even humor. Whenever Koko responds inappropriately, a human trainer helps correct that action — and over time, the machine-learning algorithm gets better at determining the best response.
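The trainer-correction cycle described above can be sketched in a few lines of code. Everything here is invented for illustration — the candidate responses, the scores, and the simple nudge-the-score update rule are hypothetical stand-ins, not Koko’s actual system, which is a far more sophisticated machine-learning model:

```python
# Hypothetical response-scoring table: the assistant picks the
# highest-scoring reply, and a human trainer's feedback adjusts scores.
scores = {
    "I'm sorry to hear that.": 0.9,  # canned reply, initially ranked highest
    "That sounds really frustrating. Lost luggage can ruin a trip. "
    "Let's see what we can do.": 0.5,  # more empathetic reply
}

def best_response():
    """The system's current choice: the highest-scoring candidate reply."""
    return max(scores, key=scores.get)

def trainer_feedback(response, appropriate, step=0.3):
    """A human trainer marks a response as appropriate or not; that
    response's score is nudged up or down accordingly."""
    scores[response] += step if appropriate else -step

# Round 1: the system offers the canned reply; the trainer flags it.
trainer_feedback(best_response(), appropriate=False)
# Round 2: the canned reply (now 0.6) still outranks 0.5; flagged again.
trainer_feedback(best_response(), appropriate=False)
# After two corrections (0.9 -> 0.3), the empathetic reply wins.
```

The design point the example makes is the human-in-the-loop one from the passage: the model does not need to be told the right answer in advance; repeated trainer judgments gradually reshape which behavior the system prefers.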
Without an empathy trainer, Alexa might respond to a user’s anxieties with canned, repetitive responses such as “I’m sorry to hear that” or “Sometimes talking to a friend can help.” With the right training, Alexa becomes much more helpful. The following is a verbatim transcription of how Alexa with Koko responds to a person who says he’s worried that he’ll fail an upcoming exam: “Exams are really stressful, but a little anxiety can help us succeed. It sharpens our minds. … It’s your body’s way to prepare itself for action. It’s actually a really good thing. I wonder if you could think of your nerves as your secret weapon. Easier said than done, I know. But I think you will do much better than you think.”
The second category of new jobs — explainers — will bridge the gap between technologists and business leaders. Explainers will help provide clarity, which is becoming all the more important as AI systems’ opaqueness increases. Many executives are uneasy with the “black box” nature of sophisticated machine-learning algorithms, especially when the systems they power recommend actions that go against the grain of conventional wisdom. Indeed, governments have already been considering regulations in this area. For example, the European Union’s new General Data Protection Regulation, which is slated to take effect in 2018, will effectively create a “right to explanation,” allowing consumers to question and fight any decision made purely on an algorithmic basis that affects them.
Companies that deploy advanced AI systems will need a cadre of employees who can explain the inner workings of complex algorithms to nontechnical professionals. For example, algorithm forensics analysts would be responsible for holding any algorithm accountable for its results. When a system makes a mistake or when its decisions lead to unintended negative consequences, the forensics analyst would be expected to conduct an “autopsy” on the event to understand the causes of that behavior, allowing it to be corrected. Certain types of algorithms, like decision trees, are relatively straightforward to explain. Others, like machine-learning bots, are more complicated. Nevertheless, the forensics analyst needs to have the proper training and skills to perform detailed autopsies and explain their results.
Here, techniques like Local Interpretable Model-Agnostic Explanations (LIME), which explains the underlying rationale and trustworthiness of a machine prediction, can be extremely useful. LIME doesn’t care about the actual AI algorithms used. In fact, it doesn’t need to know anything about the inner workings. To perform an autopsy of any result, it makes slight changes to the input variables and observes how they alter that decision. With that information, the algorithm forensics analyst can pinpoint the data that led to a particular result.
So, for instance, if an expert recruiting system has identified the best candidate for a research and development job, the analyst using LIME could identify the variables that led to that conclusion (such as education and deep expertise in a particular, narrow field) as well as the evidence against it (such as inexperience in working on collaborative teams). Using such techniques, the forensics analyst can explain why someone was hired or passed over for promotion. In other situations, the analyst can help demystify why an AI-driven manufacturing process was halted or why a marketing campaign targeted only a subset of consumers.
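The perturb-and-observe idea behind LIME can be illustrated with a short, self-contained sketch. The hiring model, its feature names, and its weights below are all invented for illustration (a real analyst would apply the `lime` Python package to the deployed model); the point is only the mechanism: nudge each input slightly, watch how the black-box score moves, and estimate each feature’s local influence:

```python
import random

# Hypothetical black-box hiring model: returns a suitability score in [0, 1].
# In practice this is the deployed model whose internals we cannot inspect.
def black_box_score(candidate):
    score = 0.0
    score += 0.5 * candidate["education"]        # education, normalized to [0, 1]
    score += 0.4 * candidate["field_expertise"]  # depth in a narrow field
    score -= 0.2 * candidate["solo_work_only"]   # little collaborative experience
    return max(0.0, min(1.0, score))

def explain_locally(model, candidate, n_samples=2000, noise=0.1, seed=42):
    """LIME-style local explanation: perturb each input slightly, observe how
    the prediction changes, and accumulate a covariance-style estimate of
    each feature's local influence on the decision."""
    rng = random.Random(seed)
    features = list(candidate)
    influence = {f: 0.0 for f in features}
    base = model(candidate)
    for _ in range(n_samples):
        perturbed = dict(candidate)
        deltas = {}
        for f in features:
            d = rng.uniform(-noise, noise)
            deltas[f] = d
            perturbed[f] = candidate[f] + d
        change = model(perturbed) - base
        for f in features:
            influence[f] += deltas[f] * change
    return {f: influence[f] / n_samples for f in features}

candidate = {"education": 0.9, "field_expertise": 0.8, "solo_work_only": 0.6}
weights = explain_locally(black_box_score, candidate)
# Positive weights pushed the decision up; negative weights pushed it down.
```

As in the recruiting example above, the analyst reading these weights can report which variables supported the recommendation (education, field expertise) and which counted against it (lack of collaborative experience), without ever opening the model itself.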
The final category of new jobs our research identified — sustainers — will help ensure that AI systems are operating as designed and that unintended consequences are addressed with the appropriate urgency. In our survey, we found that less than one-third of companies have a high degree of confidence in the fairness and auditability of their AI systems, and less than half have similar confidence in the safety of those systems.
Clearly, those statistics indicate fundamental issues that need to be resolved for the continued usage of AI technologies, and that’s where sustainers will play a crucial role.
One of the most important functions will be the ethics compliance manager. Individuals in this role will act as a kind of watchdog and ombudsman for upholding norms of human values and morals — intervening if, for example, an AI system for credit approval was discriminating against people in certain professions or specific geographic areas.
Other biases might be subtler — for example, a search algorithm that responds with images of only white women when someone queries “loving grandmother.” The ethics compliance manager could work with an algorithm forensics analyst to uncover the underlying reasons for such results and then implement the appropriate fixes.
In the future, AI may become more self-governing. Mark O. Riedl and Brent Harrison, researchers at the School of Interactive Computing at Georgia Institute of Technology, have developed an AI prototype named Quixote, which can learn about ethics by reading simple stories. According to Riedl and Harrison, the system is able to reverse engineer human values through stories about how humans interact with one another.
Quixote has learned, for instance, why stealing is not a good idea and that striving for efficiency is fine except when it conflicts with other important considerations. But even given such innovations, human ethics compliance managers will play a critical role in monitoring and helping to ensure the proper operation of advanced systems.
The types of jobs we describe here are unprecedented and will be required at scale across industries. (For additional examples, see “Representative Roles Created by AI.”) This shift will put a huge amount of pressure on organizations’ training and development operations. It may also lead us to question many assumptions we have made about traditional educational requirements for professional roles.
Empathy trainers, for example, may not need a college degree. Individuals with a high school education and who are inherently empathetic (a characteristic that’s measurable) could be taught the necessary skills in an in-house training program. In fact, the effect of many of these new positions may be the rise of a “no-collar” workforce that slowly replaces traditional blue-collar jobs in manufacturing and other professions. But where and how these workers will be trained remain open questions. In our view, the answers need to begin with an organization’s own learning and development operations.
On the other hand, a number of new jobs — ethics compliance manager, for example — are likely to require advanced degrees and highly specialized skill sets. So, just as organizations must address the need to train one part of the workforce for emerging no-collar roles, they must reimagine their human resources processes to better attract, train, and retain highly educated professionals whose talents will be in very high demand. As with so many technology transformations, the challenges are often more human than technical.
• H. James Wilson is managing director of IT and business research at Accenture Research.
• Paul R. Daugherty is Accenture’s chief technology and innovation officer.
• Nicola Morini-Bianzino is global lead of artificial intelligence at Accenture.
• Joe McKendrick is an author and contributor for Forbes.