Forget the dystopian biological engineering of ‘Planet of the Apes’ – a Thales discovery mimics biology to improve machines, not nature
Since science fiction writers first coined the term “robot” nearly 100 years ago, humans have pondered the consequences of building machines that can think and learn like people. And despite a surfeit of books and films depicting a grim outcome for humans faced with this technology, real-world progress toward the development of machines with artificial intelligence (AI) has been benign – and is designed to help people and their data stay safe, secure and become more productive in the Digital Age.
Today, making the right decisions in a data-laden environment – whether in consumer, enterprise or defense settings – can be daunting. We face unprecedented volumes of information every day, reaching terabyte levels for government agencies and large organizations. That’s why technologies featuring artificial intelligence are becoming increasingly important. On the battlefield, AI informs systems defending friendly aircraft against advanced threats, and tech companies in the private sector are announcing consumer-focused advancements in AI and machine learning nearly every day.
Fusing AI advancements with the right security, communications, data and network solutions is our job at Thales, where customers across key industry sectors count on us to help them navigate the evolving business, security and operational environments created by this new age of technological disruption.
For these customers and others, Thales recently led a major breakthrough – the development of an artificial synapse capable of learning autonomously. In partnership with the universities of Bordeaux and Évry, Thales researchers built an artificial synapse, known as a memristor, directly on a computer chip. Like other AI technologies, it mimics the way the brain processes and stores information, but unlike typical AI neural network systems, it takes less time and energy to learn, and adjusts to different electrical impulses – just like synapses in the brain.
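The behavior described above can be sketched in a few lines of Python. This is a toy model under stated assumptions – the class name, the update rule and its parameters are illustrative, not Thales’ actual device physics. The key idea is that the synapse’s “weight” is an electrical conductance, and voltage pulses nudge that conductance up or down within physical bounds, much as repeated activity strengthens a biological synapse.

```python
# Toy model of a memristor-like artificial synapse (illustrative only,
# not Thales' device model): the "weight" of the connection is a
# conductance that voltage pulses nudge up or down, within bounds.

class MemristiveSynapse:
    def __init__(self, conductance=0.5, rate=0.1):
        self.conductance = conductance  # normalized weight in [0, 1]
        self.rate = rate                # how strongly one pulse shifts the weight

    def pulse(self, polarity):
        """Apply a voltage pulse: polarity +1 potentiates, -1 depresses.
        The change shrinks as the weight nears its bound – a soft limit
        commonly assumed for memristive devices."""
        if polarity > 0:
            self.conductance += self.rate * (1.0 - self.conductance)
        else:
            self.conductance -= self.rate * self.conductance
        return self.conductance

syn = MemristiveSynapse()
for _ in range(3):
    syn.pulse(+1)          # repeated stimulation strengthens the connection
before = syn.conductance
syn.pulse(-1)              # an opposing pulse weakens it again
weakened = syn.conductance
```

Because the device itself stores and updates the weight, learning happens where the data lives – which is why this approach promises the time and energy savings the researchers describe.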
In an age when computer systems can process billions of bytes of information per second, why are scientists so focused on modeling the biology of the human brain? The answer: The brain is more efficient – it simply does a lot more with a lot less. Sure, the world’s fastest supercomputer can store 30 quadrillion bytes of information and process at speeds exceeding 8 billion megaflops. But to get the job done it requires 10 megawatts of power, about the same amount of electricity required to light 10,000 homes.
By contrast, the human brain can store the equivalent of 3.5 quadrillion bytes of information and process at speeds of 2.2 billion megaflops, yet it does so on a light bulb’s worth of power – just 20 watts.
Supercomputers are also enormous. A brain on the other hand, “fits nicely inside our head,” wrote Mark Fischetti of Scientific American, who also points out that “a cat’s brain smokes even the newest iPad – 1,000 times more data storage and a million times quicker to act on it.”
Thus the excitement over the artificial synapse – the more like a human brain our technology becomes, the more promising the potential applications. From self-driving cars to air traffic management, many of these applications are being envisioned by Thales.
30 QUADRILLION BYTES POWERED BY 10 MEGAWATTS OF ENERGY
(the same amount of energy consumed by a small town)
3.5 QUADRILLION BYTES POWERED BY 20 WATTS OF ENERGY
(the same amount of energy to power a light bulb)
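The gap in those callout figures can be sanity-checked with quick arithmetic. Taking the article’s numbers at face value (and reading “megaflops” as 10^6 floating-point operations per second), operations per watt is the metric that makes the brain’s advantage concrete:

```python
# Back-of-the-envelope efficiency comparison using the figures quoted above.

supercomputer_flops = 8e9 * 1e6    # 8 billion megaflops
supercomputer_watts = 10e6         # 10 megawatts

brain_flops = 2.2e9 * 1e6          # 2.2 billion megaflops
brain_watts = 20                   # a light bulb's worth

super_efficiency = supercomputer_flops / supercomputer_watts  # ops per watt
brain_efficiency = brain_flops / brain_watts

print(brain_efficiency / super_efficiency)  # → 137500.0
```

By these figures the brain performs roughly 137,500 times more operations per watt – the efficiency that brain-inspired hardware like the artificial synapse is chasing.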
Currently in development at Thales is an aviation-focused AI system that manages and synthesizes data from air traffic systems, weather forecasts and airports for a more complete flight picture.
“Currently, nobody has an overall, dynamic view of an aircraft’s flightpath,” said Thales Chairman and CEO Patrice Caine in a recent interview with Aviation Week. “When a weather event occurs in flight, the pilot is on his own to optimize the trajectory.”
Threat detection in IT network security is another example of how AI capabilities may be applied. In large networks where every program cannot be scanned for malicious applications, an AI solution can help monitor data-usage trends in specific categories to identify potential red flags.
While multiple machine-learning solutions might be deployed to keep up with today’s fast-moving and complex threat environment, AI systems of the future will learn automatically from evolving threats and adapt instantaneously.
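To make the monitoring idea concrete, here is a minimal, hypothetical sketch – the host names, the z-score approach and the threshold are illustrative assumptions, not a Thales product API. It learns a baseline of data usage, then flags hosts that deviate sharply from it:

```python
# Illustrative sketch (not a Thales implementation): learn a baseline of
# data usage, then flag hosts whose current usage deviates far from it.
from statistics import mean, stdev

def flag_anomalies(baseline, current, z_threshold=3.0):
    """Return hosts whose current usage sits more than z_threshold
    standard deviations above the baseline's mean usage."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [host for host, usage in current.items()
            if sigma > 0 and (usage - mu) / sigma > z_threshold]

# Historical daily usage (GB) across the fleet, and today's per-host readings.
history = [1.1, 0.9, 1.0, 1.2, 0.8, 1.0, 1.1]
today = {"web-01": 1.0, "db-02": 0.9, "build-03": 9.5}  # build-03 spikes

print(flag_anomalies(history, today))  # → ['build-03']
```

A fixed rule like this must be re-tuned by hand as threats evolve; the promise of the adaptive systems described above is that the baseline itself keeps learning.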
The key to this capability resides firmly in the artificial synapse, which researchers say has shown promise during simulations in which the technology “can autonomously learn to recognize patterns in a predictable way, opening the path towards unsupervised learning in neural networks.”
What the research doesn’t say is where we fit into a world where “self-adaptive electronic architectures” function with the same processing power as a human brain.
But fear not. As we’ve written in this space before, we’re not predicting Skynet, the feared AI network that turns on its human masters in the Terminator franchise. Rather, AI holds the enormous potential to help people in charge of complex systems make better decisions – whether it’s about a flight path or to defend against a cyber threat. And most IT professionals and executives will agree that with the amount of data coming at us every day, we need all the help we can get.