
Perception versus Reality: Conversational Computing

Feature Article | June 28, 2017 by Dan Wellers, Fawn Fitter

1. Perception: It’s about consumers interacting with devices.

Reality: What makes computing conversational isn’t the mode of input but how you engage with an application and complete a transaction. Instead of clicking through a menu of choices or speaking predefined commands, you can type or talk as if you were having a normal conversation with another human. Although most people today associate conversational computing with consumer-oriented tasks, like asking their phones for directions or interacting with customer service, that’s beginning to change: more complex business use cases show real promise.

At one co-working facility in Palo Alto, California, customers can already reserve workspaces with a casually worded text message to a chatbot that texts back to ask for any further information—such as the number of guests—it needs to complete the reservation. In the future, an HR-scheduling app could tell employees how much vacation time they have remaining, help them navigate schedule conflicts to find the most convenient dates, and automatically approve their requests for time off.

2. Perception: Conversations are all pre-programmed.

Reality: Today when we converse with Siri, Alexa, and OK Google, they misinterpret us all the time, sending us preprogrammed responses that feel as disjointed, inappropriate, and awkward as a bad date. As machines get better at understanding how we talk, though, they’ll get smarter; rather than forcing us to learn how to use them, they’ll adapt themselves to how we think, converse, and work.

Conversational systems are already capable of learning from past queries and new data in order to respond more usefully in the future. Natural language processing technology helps the digital assistants in your devices understand casual speech or text. The assistants can then interact with other systems, such as a search engine, calendar, or business intelligence application, to parse open-ended questions and deliver an answer or take action the way a human would. The more a conversational application is used, the better it refines its responses, continually learning from transaction data and user behavior just as a human learns a new language.
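The learn-from-usage loop can be illustrated with a deliberately simple word-overlap classifier: every confirmed query is fed back in as training data, so the next classification is a little better informed. The class and intent labels below are made up for illustration; real assistants use far more sophisticated statistical models.

```python
from collections import Counter, defaultdict

class IntentModel:
    """Toy intent classifier: score queries by word overlap with past
    labeled examples, improving as each confirmed query is added."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # intent -> word frequencies

    def learn(self, text, intent):
        """Feed a confirmed (query, intent) pair back into the model."""
        self.word_counts[intent].update(text.lower().split())

    def classify(self, text):
        """Return the intent whose past examples share the most words."""
        words = text.lower().split()
        scores = {intent: sum(counts[w] for w in words)
                  for intent, counts in self.word_counts.items()}
        return max(scores, key=scores.get) if scores else None

model = IntentModel()
model.learn("how much vacation time do I have left", "vacation_balance")
model.learn("book a meeting room for tomorrow", "room_booking")
model.classify("vacation time remaining")  # -> "vacation_balance"
```

Each user interaction that confirms an intent becomes another `learn` call, which is the "continually learning from transaction data" idea in miniature.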

3. Perception: This is just a new interface for existing systems.

Reality: More than a new interface, conversational computing is poised to completely change the way we interact with the computers that run our businesses. We’ll be able to configure devices, navigate virtual reality, interface with operational systems, and more without having to launch distinct applications. Instead, we’ll speak, gesture, or maybe just make a frustrated noise to indicate we need something, and the smart systems behind the scenes will respond accordingly. Our computers will apply machine learning to determine what we want, ask questions to clarify and add context, decide how to execute our requests, and deliver the results.

For example, we might say we need supplies, and our procurement system could hear the request, correlate it to past purchases, find an appropriate purchase order, and automatically place an order with an approved vendor who can deliver in the right time frame at the right price. Conversational systems may even come to initiate contact instead of waiting for requests: a maintenance system could proactively text a technician with the scheduled delivery date and installation instructions for a replacement part.
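Behind the conversational front end, the "decide how to execute" step is ordinary business logic. A hedged sketch of the vendor-selection piece of the procurement example, with invented vendor data and a simple cheapest-in-time rule standing in for a real sourcing engine:

```python
from dataclasses import dataclass

@dataclass
class Vendor:
    name: str
    unit_price: float
    lead_time_days: int

def pick_vendor(vendors, needed_by_days, budget):
    """Choose the cheapest approved vendor that can deliver in time
    and within budget; return None if no vendor qualifies (toy logic)."""
    candidates = [v for v in vendors
                  if v.lead_time_days <= needed_by_days
                  and v.unit_price <= budget]
    return min(candidates, key=lambda v: v.unit_price) if candidates else None

approved = [
    Vendor("Acme Supply", 4.50, 7),
    Vendor("Globex", 3.90, 14),
    Vendor("Initech", 5.20, 3),
]
best = pick_vendor(approved, needed_by_days=10, budget=5.00)  # Acme Supply
```

The conversational layer translates "we need supplies" into the parameters of a call like this; the system that answers it is the same procurement logic businesses already run.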

This story originally appeared on the Digitalist.
