Now that platforms and their related services have become commonplace, all eyes are on artificial intelligence.
“Systems are scalable, and complex ones can be trained quickly,” says Wolfgang Wahlster of the German Research Center for Artificial Intelligence (DFKI). Nevertheless, there are some challenges.
According to a recent study by analysts at the consulting firm McKinsey, technology companies invested between US $20 billion and US $30 billion in artificial intelligence (AI) last year – three times more than in 2014. Ninety percent of those investments went to research and development and practical software applications, while the remaining 10 percent went toward AI acquisitions.
According to the experts, the initiatives that have sprung from those investments are improving operating margins in many industries. Automobile manufacturers that commit to AI, for example, can expect a 7 to 8 percent margin, whereas those that don’t can count themselves lucky to break even or turn a small profit. AI is also proving worthwhile for financial services providers (a 12 percent margin versus 2 percent) and the health industry (17 percent versus –1 percent), the analysts report.
From Smart Services to Artificial Intelligence
At the recent Digital Summit in Ludwigshafen, Germany, acatech president Henning Kagermann met up with DFKI head Wolfgang Wahlster to discuss the latest advances in AI.
At CeBIT 2017, Kagermann had just handed over the final report from the “Autonomous Systems” expert forum to German Chancellor Angela Merkel. For Kagermann, AI is a logical continuation of the activities that were carried out in the realm of smart services, which mainly focused on the use of platforms and their services.
Henning Kagermann and Wolfgang Wahlster talk at the Digital Summit about the progress AI and machine learning have made to date.
Wahlster: The breakthrough in machine learning came last year. Software that could learn was nothing new; it had been able to do that for years. What was missing was scalability. When you looked more closely, you realized that the big proof points were still in their infancy: “Mickey Mouse” systems that were not yet mature. We had developed systems that delivered great results in very narrow, specialized domains – “nerd systems,” to exaggerate somewhat.
Thanks to machine learning based on huge data volumes, we are now able to scale our systems to tackle the really big issues. Not only can we learn end to end, we can also plan activities in real time. This activity planning, combined with neural learning, has enabled us to significantly improve language and image understanding. In addition, Big Data now provides us with enough training data.
Last but not least, the breakthrough in high-performance graphics cards and their programming has enabled us to develop very powerful systems, so that today we can train complex systems within 20 to 30 minutes.
Kagermann: Amazon recently started selling groceries in Germany via its Amazon Fresh platform, and claims to check every single strawberry for freshness before shipping it to the customer. No one needs to worry about decaying produce anymore.
Wahlster: Machine vision (image understanding) has meanwhile become so fast that we are moving further and further away from testing only random samples. Today we can check the quality of each individual item. Pakistan, for example, has huge mango plantations. But because some of the mangos contain threadworms, they are all barred from import into the European Union (EU).
With the help of machine learning and infrared cameras, though, we’ve found a way to check every single fruit. Such a testing procedure might also even make the mangos much cheaper for us in the long run, should they ever be allowed into the EU.
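The shift Wahlster describes – from inspecting random samples to inspecting every single item – can be illustrated with a deliberately minimal sketch. The feature, threshold rule, and function names below are all hypothetical stand-ins for a real trained vision model and infrared pipeline:

```python
# Hypothetical sketch: per-item inspection instead of random sampling.
# Each fruit is reduced to a single feature (say, mean infrared absorption);
# a threshold learned from labeled examples flags likely infestation.

def learn_threshold(samples):
    """samples: list of (feature, is_infested) pairs.
    Returns the midpoint between the two class means - a minimal
    stand-in for training a real classifier."""
    good = [f for f, infested in samples if not infested]
    bad = [f for f, infested in samples if infested]
    return (sum(good) / len(good) + sum(bad) / len(bad)) / 2

def inspect_batch(features, threshold):
    """Check every item in the batch, not just a random sample."""
    return [f > threshold for f in features]

labeled = [(0.2, False), (0.3, False), (0.8, True), (0.9, True)]
t = learn_threshold(labeled)
print(inspect_batch([0.25, 0.85, 0.4], t))  # flags only the second fruit
```

The point of the sketch is the loop in `inspect_batch`: once classification is fast enough, running it over every item costs little more than sampling did.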
Kagermann: Autonomous driving, which DFKI is also researching, is a much-discussed topic in AI. Collecting training data is anything but easy, because we don’t even know all of the scenarios that exist in traffic. You’d have to travel up and down the streets for years, filming every possible situation. Is that why DFKI now generates artificial data?
Wahlster: Yes, doing so is really important, because one of the major problems in artificial intelligence and machine learning right now is the lack of mass data for very dangerous traffic situations. Take for example when a deer jumps in front of your car at night – there are no YouTube videos that could teach the system how a car should react in this case. There simply isn’t enough mass data available for the car to learn from.
The idea is to create synthetic data and teach vehicles on that basis. I expect we’ll be able to make great strides with this approach in the next two to three years.
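One way to picture this approach: instead of waiting for rare events to be filmed, you randomly parameterize the event and label each synthetic instance with a rule. Everything below is an illustrative assumption – the parameter ranges, the time-to-collision rule, and the function names are not from DFKI’s actual pipeline:

```python
# Hypothetical sketch: generating synthetic rare-event scenarios
# (a deer crossing the road at night) when no real mass data exists.
import random

def synth_scenario(rng):
    """Randomly parameterize one deer-crossing event."""
    return {
        "distance_m": rng.uniform(5, 120),   # deer's distance ahead
        "speed_mps": rng.uniform(8, 36),     # vehicle speed
    }

def label(scenario, ttc_limit=2.5):
    """Rule-based label: emergency-brake if time to collision is short.
    In practice, a physics simulator and human review would refine this."""
    ttc = scenario["distance_m"] / scenario["speed_mps"]
    return "brake" if ttc < ttc_limit else "continue"

rng = random.Random(42)  # seeded for reproducibility
dataset = [(s, label(s)) for s in (synth_scenario(rng) for _ in range(1000))]
print(len(dataset), "labeled synthetic scenarios generated")
```

A thousand labeled deer-crossing variants can be generated in milliseconds, whereas capturing even one real instance on film is a matter of luck.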
Kagermann: Autonomous driving will need to prove to us that it can make driving much safer. Have we come that far yet?
Wahlster: It is essential that these kinds of systems also contain an explanation component. Unfortunately, today’s machine learning methods are not yet sophisticated enough for the systems to explain their decisions to us. We can use them for classification tasks, such as the strawberries and mangos, or distinguishing a pedestrian from a truck. More complex decisions aren’t in the cards yet.
And there is a further problem: Self-learning systems are not capable of deleting false or obsolete data once it has been learned. It is extremely difficult for them to pull that information out of the neural network again. In humans, this works via “extinction learning.” People who have grown used to chronic pain caused by poor walking posture, for example, can, and indeed must, retrain their neural network to walk properly again – through physiotherapy, for instance. A machine’s neural network doesn’t do that. It will continue to “hobble.”
Kagermann: How can we ensure that a statistics-based self-learning system adheres to ethical rules?
Wahlster: Systems must stick to these rules, that’s the No. 1 priority. Compliance will have to be checked against the standard scenario catalog drawn up by the German National Ethics Council. If a car doesn’t follow these rules, it won’t be allowed on the road. TÜV (safety) inspections will be done to confirm a car’s ability to master abnormal situations. But one thing is already clear: It will be some years yet before self-learning in the car makes sense and is ethically tenable.
Top image via A. Schmitz, 2017; Henning Kagermann (left) and Wolfgang Wahlster