The idea of self-replication is as old as life on Earth – if not older. While most scientists agree that it is an innate prerequisite for sustaining and propagating life, there is a philosophical debate about whether machines can adopt it. For example, a biological virus is a self-replicating and adaptable organism, yet it operates as a highly complex biochemical machine.
However, if you look at the machine-like aspects of viruses, you will soon see why they are an ideal model for self-replicating robots and devices. The DNA or RNA of a virus is equivalent to the software that runs a robot or device, and a 3D printer can synthesize and assemble parts like a cell does.
Following the master plan of nature, scientists at the University of Oslo have created self-replicating, learning robots. The robots look nothing like a Terminator; they are multi-limbed machines that learn to overcome environmental challenges. Moreover, they can teach themselves how to use a 3D printer to create new parts.
Another, similar idea is the self-replicating USB stick, which can spread software faster than the Internet. The stick holds an operating system and any software that needs to run on it, and copies its contents to another USB stick with just a few clicks. Of course, this form of replication doesn’t discriminate among the files it copies, so it may not be a good idea to spread sensitive software this way.
Although this technology may sound fascinating to futurists, it does create a few concerns. For example, do we let our devices and robots run on their own? Or do we embed individual stops to tightly control what they do and what a given software or device learns?
Self-Replication Out of Control
We all know what happens when replication runs out of control. In nature, viral infections can wipe out nearly an entire population. The Ebola outbreak in West Africa in 2014 showed how devastating a fast-replicating virus can be; in past outbreaks, the disease has killed up to 90 percent of infected people. The Spanish Flu of 1918 sickened up to 40 percent of the world’s population and killed an estimated 50 million people.
In the tech world, the most notorious malicious program so far is Stuxnet, a worm designed to damage systems in Iran’s uranium enrichment facilities. While it had a specific target, its destructive potential was enormous, because infrastructure – energy and water systems – affects a broad population. Reports that nearly 60 percent of infected computers were located in Iran look very much like the numbers we know from an outbreak of a natural infection.
Considering that trillions of IoT devices are expected to be in use by 2050, our increasingly digital world must seem like paradise for these viruses – unless we do something now to protect our technology thoroughly.
Too Risky to Run Free
The risk of spreading an infection depends on the density of the system. When there are millions of similar connections, the risk rises accordingly. Inevitably, we will standardize most connections between IoT devices and their counterparts, which further increases the density of the network. In fact, estimates suggest that such systems will be approximately 12 times more prevalent in 2020 and 70 times more prevalent in 2030 than today. And with greater density comes greater risk.
Letting devices develop and run their own recombinant code would be far too risky – effectively an open-field experiment. Given that our civilization already runs on software and IT (and will probably do so even more in the decades to come), our society would be threatened if devastating software viruses entered our systems and shut everything down, whether by accident or by design.
Here are three ideas that can help minimize the risk of self-replicating systems:
- Allow only partial replication. We may allow only a partial replication of the full code or body plan. This approach would enable evolutionary traits, but under narrow control. The code may include a “not to be touched” core system and an outer layer of free-changing code.
- Build in replication stops. Nature caps each chromosome with telomeres that shorten with every replication cycle and eventually stop the process. Perhaps we can embed similar caps in our devices and programs.
- Limit the supply. We may limit the energy or data-exchange supply available to confine the range of replication and transmission of capabilities.
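The three safeguards above can be sketched in code. The snippet below is an illustrative Python toy, not a real system – all names (`Replicator`, `CORE_CODE`, `telomere`) are hypothetical. It shows a replicator that carries a telomere-style countdown, refuses to copy itself if its protected core has been tampered with, and allows only its outer layer of code to change.

```python
import hashlib

# Hypothetical immutable core ("not to be touched") and its reference hash.
CORE_CODE = b"def act(): pass"
CORE_HASH = hashlib.sha256(CORE_CODE).hexdigest()

class Replicator:
    def __init__(self, telomere=3, outer_code=b""):
        self.telomere = telomere      # replication stop: counts down like a telomere
        self.outer_code = outer_code  # freely mutable outer layer

    def core_intact(self):
        # Partial replication: the core must be unchanged before copying.
        return hashlib.sha256(CORE_CODE).hexdigest() == CORE_HASH

    def replicate(self, mutated_outer):
        if self.telomere <= 0:
            return None               # built-in replication stop reached
        if not self.core_intact():
            return None               # refuse to copy a corrupted core
        # The child inherits a shortened telomere, so the lineage dies out.
        return Replicator(self.telomere - 1, mutated_outer)

# A lineage starting with telomere=3 can produce only three generations.
children = 0
r = Replicator(telomere=3)
while True:
    child = r.replicate(b"new behaviour")
    if child is None:
        break
    children += 1
    r = child
```

The third safeguard – limiting the energy or data supply – would act the same way in practice: instead of a counter, replication halts when an external resource budget is exhausted.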
The current speed of change in the IoT, robotics, and learning systems makes it likely that the devices we call “things” and “robots” will become more independent. If we have not yet decided how their evolution will run in this next round of experiments with software and 3D printing, we might want to start thinking seriously about the implications now.
For more insight on how new forms of human-machine interaction will change the structure of industry and society, view the Digitalist’s infographic Robots: Job Destroyers or Human Partners?
Kai Goerlich is the idea director of Thought Leadership at SAP.