
An AI Shares My Office

Feature Article | January 24, 2017 by Dinesh Sharma, Sam Yen, Markus Noga, Erik Marcade, Chandran Saravana, Danielle Beurteaux

Knowledge workers should expect artificial intelligence to be a colleague rather than a replacement. But AI could wreak havoc with organizations’ structure and decision making unless they are ready to adapt.

Blame House of Cards. The Netflix-produced hit was the result of an algorithm coupled with Netflix’s large collection of data on what viewers like to watch. Taking that as inspiration, ad agency McCann Erickson recently added the world’s first artificial intelligence (AI)-based creative director to its team in Japan. The memorably named AI-CD β will use data on award-winning commercials to produce ideas for new campaigns.

The company isn’t the first to let the math do the thinking: In 2014, Hong Kong–based venture capital firm Deep Knowledge Ventures announced a new addition to its board of directors, VITAL, which uses data to vote on potential investments. More recently, Finnish tech company Tieto welcomed Alicia T., an AI complete with a conversational interface, to the board of its new data-driven business services unit (a win for board diversity?).

This doesn’t mean AI-based technologies will become your overlords or your replacements. Instead, they will very likely be colleagues. (Just don’t expect them to pay for lunch.)

For all the fears of AI taking our jobs, the reality will likely be less dramatic. Outside of repetitive work that requires no independent thought, AI-human collaboration, rather than outright replacement, is the future of work. Jobs will be shared, with some tasks delegated to AI and monitored, to some extent, by humans. According to McKinsey & Co., roughly 60% of occupations could have 30% or more of their constituent tasks automated.

But there will need to be big technological and cultural shifts to make AI as common as your laptop. The corporate structure as we know it could disappear as conventional hierarchies are replaced by new models with different emphases and values. Meanwhile, the skills that executives and employees need to bring to an enterprise will change. Creativity and problem solving will become the highest-valued human abilities.

Better start preparing your organization now.

Meet Your New Colleague

When Oxford Martin School and Citi GPS released the 2015 report Technology at Work, one particular number garnered a lot of attention—and more than a bit of panic. According to the report’s authors, about 47% of U.S. jobs were at high risk from computerization (19% were at medium risk, and 33% at low risk).

McKinsey & Co. reports a slightly different take. Yes, automation is coming, but it might not be for your entire job, just part of it (hopefully, the part you don’t like to do). McKinsey thinks that looking at specific tasks is a much better predictor of automation. Highly repetitive work, such as data processing, is very likely to be automated, as are some management tasks, such as monitoring and measuring certain outputs and performance. But other, more relationship-centric levels of managing, decision making, and planning won’t be going to the machines anytime soon. In fact, the number of jobs that will be entirely replaced could be very low. (The bigger incidence of near-future automation is likely to happen in developing economies where there is still a large dependence on human labor.)

Augmentation isn’t a new philosophy. Engineer and inventor Douglas Engelbart—who created the first computer mouse in 1964—was one of the early tech pioneers who believed that technology would work alongside humans.

His idea was that the true purpose of technology was to augment and amplify human capabilities, not completely replace human labor.

Indeed, most jobs combine repetitive, consistent tasks with intellectual and decision-making elements, says Sam Ransbotham, associate professor of information systems at the Carroll School of Management at Boston College. The question, he says, is how the work will be divided between humans and machines. Just don’t expect it to be an all-or-nothing landscape.

“Much of the framing historically of automation has been around this dichotomy,” Ransbotham says. “That’s not really a productive way of thinking of this particular change. We’re not going to stop changes from happening, but it’s different when we’re talking about augmenting more knowledge-based work.”

AI as Decision Collaborator

Thomas H. Davenport, President’s Distinguished Professor of Information Technology and Management at Babson College and co-author (with Julia Kirby) of Only Humans Need Apply: Winners and Losers in the Age of Smart Machines, thinks that in knowledge-based work the line between human and automated tasks will be drawn at the decision-making level. Highly repetitive decisions that are backed by significant data will be delegated to AI. But even then a human role remains.

“Humans will need to check on the outcomes and see if the models and the algorithms and the rules are performing as intended, and then intervene if they don’t,” Davenport says. “I think that humans are more likely to be the integrators, the final decision makers, who sort of assemble the different opinions from different machines.”

Even in a scenario such as Wall Street trading, where decisions are largely left to machines, it’s still important for checks and balances to be put in place, says Davenport. “It’s not easy to intervene after a decision has been automated,” he says. “We have a tendency to let it ride, maybe often beyond the time when we should.”

Labor will be divided along the lines of what AI can and can’t do; creativity, abstract concepts, and many other human qualities are not among AI’s capabilities, for example. It’s still a challenge to create AI that can truly replicate the kinds of human abilities we take for granted, and many of the decisions that managers and executives make on a daily basis don’t easily fit into the AI paradigm. The complexity of human relationships, minds, and cultures is currently beyond AI’s grasp, and this extends into the workplace. The algorithmically defined “right” decision might not necessarily be the best political and social decision. The role of AI in these instances could be to clarify and winnow options and to help identify opportunities.

Organizations will need to parse decision making not just in terms of its value but also according to its uniquely human elements. For many people, much of a typical workday involves mechanical tasks that are often mistakenly thought of as creative and unique. These could potentially be farmed out to an AI bot (think of never needing to write a generic e-mail ever again, and rejoice). With those time-consuming tasks out of the way, there would be more time to focus on truly valuable pursuits, such as determining on an empathetic level what customers really want and tapping into their aspirations.

How AI Will Change the Structure of Organizations

The changes that AI collaboration engenders could go beyond the task or employee level to challenge the entire way we think about how companies are organized. Most companies are currently organized like machines, with discrete elements, each focused on its own purview.

But AI could usher in a new style of corporate organization. Companies will become more connected and less siloed and hierarchical; they’ll operate in a more organic, flexible manner.

These connected companies will be able to synthesize and distill inputs quickly and constantly. Instead of defining work by a fixed department or business unit, they will define it by projects and purpose.

This could change the manager–employee relationship as well. The dynamic nature of a connected organizational structure places less emphasis on seniority and more on ideas, so a junior employee’s good idea is more likely to be given weight. The power of the HiPPO—highest-paid person’s opinion—could be on the wane.

AI Designed for Humans

Good design creates a symbiotic relationship with technology. If the purpose of AI technology is to amplify human capabilities, then it must be designed to create an optimal experience for its human users, with minimal frustration and maximum efficiency.

For example, AI programs could be designed to offer suggestions based on context, similar to those that Amazon provides to consumers as they shop on its site. AI could suggest apps based on what people in similar positions use, recommend collaborations and networks, research sources, and even intervene to help human colleagues when appropriate.
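As a rough illustration only (the roles, app names, and ranking logic below are hypothetical, not drawn from any particular product), such context-based suggestions could be as simple as ranking the tools that colleagues in the same role already rely on:

```python
# Hypothetical sketch: suggest apps to an employee based on what colleagues
# in the same role already use, ranked by how many peers use each one.
from collections import Counter

def suggest_apps(employee_role, usage_by_employee, already_installed, top_n=3):
    """usage_by_employee: list of (role, apps) pairs, e.g. ("analyst", {"dashboard"})."""
    counts = Counter()
    for role, apps in usage_by_employee:
        if role == employee_role:
            counts.update(apps)
    # Rank apps the employee does not yet have by peer popularity.
    candidates = [(app, n) for app, n in counts.most_common() if app not in already_installed]
    return [app for app, _ in candidates[:top_n]]

usage = [
    ("analyst", {"spreadsheet", "dashboard", "notebook"}),
    ("analyst", {"dashboard", "notebook"}),
    ("designer", {"wireframe_tool"}),
]
print(suggest_apps("analyst", usage, already_installed={"spreadsheet"}))
# -> ['dashboard', 'notebook'] (order of equally popular apps may vary)
```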


Will You Step Up?

Roles will change as AI becomes a ubiquitous presence in the workplace. Where will your employees fit in?

In Only Humans Need Apply: Winners and Losers in the Age of Smart Machines, co-authors Thomas H. Davenport and Julia Kirby delineate five categories of work roles that will evolve as automation enters the organization:

  • Step up: Oversee automation within an organization and ensure it’s a good fit with the business and the larger world.
  • Step aside: Leverage creative, innovative human skills and emotions.
  • Step in: Keep AI on track by supervising its processes and results and making necessary adjustments.
  • Step narrowly: Perform highly specialized work that wouldn’t be economical to automate.
  • Step forward: Create the next-generation AI technology.

Stepping in, says Davenport, involves a high level of interaction with a machine, “almost as a colleague.” Many employees will likely transition to that role, but there will also be new roles that involve monitoring and improving AI performance. Stepping up, for example, is a managerial role that encompasses high-level decisions and resource management, akin to a portfolio manager’s. “As we have fewer people in organizations doing the day-to-day work, I think it will certainly be a more important part of the managerial role than it is today,” he says.


What we’re looking at is a fundamental shift in mindset, says James Cham, partner at San Francisco–based venture capital firm Bloomberg Beta. Traditional software has been focused on precision and efficiency, he says, whereas AI is predictive. “Records of predictions, which capture what I’m thinking, calculate whether those predictions are right or wrong, and help to inform those predictions, will actually be of much higher value,” he says.
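One way to picture the kind of prediction records Cham describes, sketched here with invented field names rather than any real system, is a simple ledger that logs each prediction, scores it once the real outcome is known, and reports a running accuracy that can inform how much to trust the model:

```python
# Illustrative sketch (not from the article): a prediction ledger that records
# each model prediction, scores it once the outcome is known, and reports a
# running accuracy that can feed back into future decisions about the model.
from dataclasses import dataclass, field

@dataclass
class PredictionLedger:
    records: list = field(default_factory=list)

    def log(self, case_id, predicted):
        self.records.append({"case_id": case_id, "predicted": predicted, "actual": None})

    def resolve(self, case_id, actual):
        for r in self.records:
            if r["case_id"] == case_id:
                r["actual"] = actual

    def accuracy(self):
        scored = [r for r in self.records if r["actual"] is not None]
        if not scored:
            return None
        return sum(r["predicted"] == r["actual"] for r in scored) / len(scored)

ledger = PredictionLedger()
ledger.log("deal-001", predicted="win")
ledger.resolve("deal-001", actual="loss")
print(ledger.accuracy())  # 0.0 -- a signal to review the model or its inputs
```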

The amount of time we spend each day looking for answers will be minimized because AI will have the information. Asking the right questions, however, will become much more important. The executive of the near future will be trained to think creatively, a shift from the traditional emphasis on procedural knowledge to one of unstructured problem solving with AI as a patient, indefatigable research assistant.

In a bid to supply the next crop of executives with the tools to work with AI, some of the top business schools, including Harvard Business School and MIT’s Sloan School of Management, have recently begun offering courses on AI collaboration as part of their MBA programs. The goal is not only to educate students on AI in general but also to teach them how to use it as a decision-making tool. One class will even cover how AI can be used to assemble optimal teams.

The strategic advantage, says Cham, will come to those who best understand the AI models they create. They will be the companies that experience the fastest growth and increased effectiveness. “Executives should have an inventory of not just every app that they have but of every single model they deploy,” he says. “They should know what sort of return they get from that model, if they want to continue investing in it or want it to get better. When do they trust the model to make decisions by itself, or when do they say, ‘We need to have oversight on it?’”
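A minimal sketch of such a model inventory might look like the following; the fields, figures, and model names are assumptions for illustration, not a prescribed schema:

```python
# Hypothetical model inventory of the kind Cham describes: every deployed
# model, the return attributed to it, and whether it decides autonomously
# or requires human oversight. All names and numbers are illustrative.
model_inventory = [
    {
        "model": "churn_predictor_v3",
        "owner": "customer-success",
        "estimated_annual_return_usd": 250_000,
        "decision_mode": "autonomous",       # acts without case-by-case review
        "review_cadence": "quarterly",
    },
    {
        "model": "credit_limit_recommender",
        "owner": "finance",
        "estimated_annual_return_usd": 90_000,
        "decision_mode": "human_oversight",   # suggestions only; a person signs off
        "review_cadence": "monthly",
    },
]

# A simple portfolio view: which models justify continued investment?
for m in sorted(model_inventory, key=lambda m: m["estimated_annual_return_usd"], reverse=True):
    print(f'{m["model"]}: ${m["estimated_annual_return_usd"]:,} / yr, mode={m["decision_mode"]}')
```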

Welcomed with Open Arms?

For all the promise of working with AI colleagues, there is, and will likely remain, some resistance to its implementation. Understanding how AI arrives at conclusions can help users feel more comfortable with their new collaborators. “Transparency of a decision’s logic is really critical,” says Babson College’s Davenport. “But with a lot of these relatively new technologies on deep learning and so on, there’s basically no transparency.”

Transparency should be part of an AI system’s design. A system can, for example, be programmed to offer a rationale for its thinking that allows a human to dig down through layers of information. This might be delivered through a report or even a voice system. So if an employee asks an AI system why it did what it did, the system will answer. Machine-learning models should not be black boxes; instead, they should be able to explain the confidence rate in the results, the error rates, and why the model is predicting or proposing certain things, so that people can follow up and double-check.
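For illustration only, and not tied to any particular product, a transparent answer might be packaged so that the prediction travels with its confidence, its historical error rate, and the top factors behind it, giving a person something concrete to question:

```python
# Assumed interface, sketched for illustration: an AI answer that carries its
# own rationale so a human can drill down instead of facing a black box.
def explain_prediction(prediction, confidence, model_error_rate, factor_weights, top_k=3):
    # Rank the factors by the size of their contribution, positive or negative.
    top_factors = sorted(factor_weights.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_k]
    return {
        "prediction": prediction,
        "confidence": confidence,
        "historical_error_rate": model_error_rate,
        "top_factors": top_factors,  # (factor, contribution) pairs a person can challenge
    }

answer = explain_prediction(
    prediction="approve_discount",
    confidence=0.82,
    model_error_rate=0.07,
    factor_weights={"customer_tenure": 0.45, "recent_complaints": -0.30, "order_volume": 0.25},
)
print(answer["prediction"], answer["confidence"], answer["top_factors"])
```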

How AI-enabled tools are integrated into employees’ work is also important. It all comes down to trust. New AI colleagues should be introduced just as new human team members would be. Start by giving them small, easy tasks and then gradually give them bigger, more important jobs. That’s how trust is gained in the workplace, and it shouldn’t be any different for AI collaboration tools.

Though some organizations may decide to allow AI to operate independently, building in the flexibility so that users can adjust or override their AI tools helps maintain a sense of purpose and control. It also helps if organizations explain employees’ future options and give them a sense of what new skills they may need. “It’s going to take some time, I think, to prepare, so you need to tell people about it,” says Davenport.

But generational changes might make AI acceptance easier in a more organic fashion. Millennials will soon be a significant force in the workplace, and they have grown up with technology. Their acceptance of it is stronger, and they take its presence in the workplace for granted. Using AI tools will be no different for them.

Determine the Best Use of AI

But before any of these big cultural shifts happen, enterprises must think about how they will deploy this technology. Many still think of AI only in terms of labor replacement or as a magical cure-all for business problems. “I think, in general, that kind of coarse-grained thinking is not helpful, and it is actually not accurate,” Cham says.

Feasibility is another important factor in implementing AI in the workplace. Just because automation is possible does not mean it is practical or cost effective. In some cases, human labor will remain less expensive and more effective for at least the foreseeable future.

Indeed, Cham thinks that a lot of money will be wasted on misguided AI investments. “Even after you solve the technical problems, we have a bigger problem, which is we don’t have good economic frameworks for determining when AI makes sense,” he says. “We need better intuitions around where can you get a better bang for the buck. What does AI complement, and what does it replace? I think that sort of thinking is what we really, really need now,” he says.

The right way to think about AI systems is to focus on their incredible value in reducing the cost of prediction, he says. “But also, critically, where can we expect the results to get better and faster over time?”

The impact of AI on the workplace is going to be enormous. We’re just starting to experience the current real-world applications of AI and understanding how they’ll develop in the future. Now is the time to begin formulating a plan for AI’s implementation in the workplace and to prepare and train employees for what’s ahead. Think of how to redesign processes to create the best combination of human and AI abilities, says Cham. “It’s a question of what you focus on and what kind of teams and infrastructure you build to actually make more effective decisions and run better processes.”

Read more thought-provoking articles in the latest issue of the Digitalist Magazine, Executive Quarterly.

Dinesh Sharma is entrepreneur-in-residence at SAP.io.
Sam Yen is Chief Design Officer and Managing Director at SAP.
Markus Noga is Vice President of Machine Learning Incubation at SAP.
Erik Marcade is Vice President of Advanced Analytics, Products and Innovation, at SAP.
Chandran Saravana is Senior Director, Advanced Analytics, at SAP.
Danielle Beurteaux writes about technology and business.

This story originally appeared on The Digitalist.
