AI is steadily working its way into our personal lives through self-driving cars and intelligent personal assistants. In the enterprise, we likewise see AI taking hold in adaptive marketing and cybersecurity. The rise of AI is exciting, but the term is often thrown around to win buzzword bingo rather than to accurately reflect technological capabilities. In cybersecurity in particular, it's all too easy to be sold marketing snake oil as vendors tout AI and machine learning when, caveat emptor, not all AI technologies on the market are created equal.
Machines learn the same way humans do
The best way to discern fact from alternative fact is to understand what AI really is and how it works. I’ve found that the easiest way to explain artificial intelligence is to compare it to something we are all very familiar with — human intelligence. At its core, human intelligence has a simple information flow system model: input, processing, and output. Input takes place in the form of sensing or perceiving information (via your eyes, ears, nose, etc.). Processing occurs in the middle; this is where knowledge or memories are formed and retrieved, decisions and inferences are made, and learning occurs. Once the brain processes information, the result is some form of output, such as action or speech.
For example, you are driving down the road and come up to a stop sign. As you near the intersection, you perceive the stop sign, you hit the brake, and you come to a complete stop. In this case, the stop sign is the input and your action to stop the car is the output; everything in between is processing. You know how to respond to a stop sign because you’ve learned by study and practice that a stop sign requires you to stop.
Artificial intelligence is a collection of techniques that have analogs and similarities to human intelligence. In machines, technology that deals with input is often exemplified by visual or speech recognition, natural language processing, and the like. The output is the way that these machines interact with us or other machines (e.g., Siri’s speech generation and navigation systems). Processing or learning is in between.
Neural networks help machines learn on a deeper level
When machines learn, we quite simply call this “machine learning.” There are many algorithms for machine learning, but one buzzy technique today is deep learning, which itself is based on a set of algorithms known as neural networks. Inspired by human biology, neural networks are mathematical simulations of a collection of neurons (which, if you recall from high school biology, are integral to human intelligence). In the image below, the circular nodes represent artificial “neurons” and the lines represent connections from the output of one node to the input of another. The signals fire from left to right.
These networks learn by attempting to map datasets presented as input to desired outcomes, or outputs. Mathematical equations compute the simulated output and compare it to the desired outcome; the resulting differences drive adjustments to the connection strengths, iteratively tuning them until the output is reasonably close to the desired outcome. Once that point has been reached, we say that the network has "learned."
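The adjust-and-compare loop described above can be sketched with a single artificial neuron. This is a minimal illustration, not the network from the article's figure: the neuron learns the logical OR function by repeatedly comparing its simulated output to the desired outcome and nudging its connection strengths (weights) to shrink the difference.

```python
import math
import random

def sigmoid(x):
    # Squashes any input into the range (0, 1), like a neuron "firing".
    return 1.0 / (1.0 + math.exp(-x))

# Training data: inputs paired with desired outcomes (logical OR).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = random.uniform(-1, 1)
rate = 1.0  # how aggressively each difference adjusts the weights

# Iteratively adjust connection strengths until the simulated
# output is reasonably close to the desired outcome.
for _ in range(5000):
    for inputs, target in data:
        out = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
        error = target - out
        # Gradient of the squared error with respect to the weights.
        grad = error * out * (1 - out)
        weights = [w + rate * grad * x for w, x in zip(weights, inputs)]
        bias += rate * grad

def predict(inputs):
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

for inputs, target in data:
    print(inputs, round(predict(inputs)), target)
```

A real deep network stacks many layers of such neurons and uses backpropagation to push the same kind of correction through all of them, but the principle is the one shown here.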
The neural network in the above image is very simple. Neural networks are usually significantly larger and more complex, containing thousands of nodes and many layers that together perform an enormous number of calculations. These massive networks enable what is known as deep learning, which can make much more complex predictions.
Choose the right weapon for the battle
But deep learning is not a silver bullet. It's important to pick both the right algorithm and the right data for the job at hand. Whenever I discuss this, a story comes to mind about an alleged experiment by the Pentagon in the 1980s. According to the popular anecdote, the Pentagon tried to identify camouflaged tanks using a neural network. With just one mainframe, researchers trained a relatively small neural network using 200 pictures — 100 tanks and 100 trees. The experiment achieved remarkable success in the lab (almost 100 percent accuracy!), but in the field, it failed miserably. Purportedly, the researchers had taken all of the tank photos on cloudy days and all of the tree pictures on sunny days. Consequently, the neural network learned to identify sunniness, not tanks.
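The tank parable can be reproduced numerically. In this toy sketch (all numbers are made up for illustration), each photo is reduced to two features: a "turret" score and a brightness score. Because every training tank was shot on a cloudy day and every tree on a sunny day, a simple nearest-centroid classifier latches onto brightness, and a sunny-day tank fools it.

```python
# Each sample: (turret_score, brightness). Labels: 1 = tank, 0 = tree.
# In the "lab" data, every tank photo is dark (cloudy) and every
# tree photo is bright (sunny) -- brightness is a hidden confound.
train = [((0.8, 0.1), 1), ((0.7, 0.2), 1),
         ((0.1, 0.9), 0), ((0.2, 0.8), 0)]

def centroid(samples):
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(2))

tank_c = centroid([x for x, y in train if y == 1])
tree_c = centroid([x for x, y in train if y == 0])

def dist2(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def classify(sample):
    # Assign the sample to whichever class centroid is closer.
    return 1 if dist2(sample, tank_c) < dist2(sample, tree_c) else 0

# Lab test, same confound: a cloudy-day tank is classified correctly.
print(classify((0.75, 0.15)))

# Field test: a tank photographed on a sunny day. Brightness dominates
# the distance, so the model calls it a tree.
print(classify((0.8, 0.9)))
```

The model never saw "sunniness" named anywhere; it simply found the feature that best separated the training data, which is exactly what the anecdote warns about.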
Of course, visual recognition via deep learning has now become a reality, but the parable teaches us that we must match the right data with the right principles. In cybersecurity, we see this pain point often. The majority of machine learning technologies in the cybersecurity market leverage a very specific learning method known as supervised machine learning. Supervised machine learning is similar to the way that humans learn by example. Analysts present a computer with a labeled dataset used to train it; a dataset of photos labeled "rhino" or "giraffe," for example, will help a computer learn that rhinos have horns and giraffes have long necks.
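Learning by labeled example can be shown with a k-nearest-neighbors sketch. The feature values and species below are invented purely for illustration: each labeled animal is a point, and a new sample takes the majority label of its closest neighbors.

```python
# Labeled training examples: (horn_length, neck_length) -> species.
# The labels are what make this *supervised* learning.
labeled = [((0.9, 0.2), "rhino"), ((0.8, 0.3), "rhino"),
           ((0.1, 0.9), "giraffe"), ((0.0, 0.8), "giraffe")]

def classify(sample, k=3):
    # Rank the labeled examples by squared distance to the sample,
    # then vote among the k nearest neighbors.
    ranked = sorted(
        labeled,
        key=lambda item: sum((a - b) ** 2 for a, b in zip(item[0], sample)),
    )
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)

print(classify((0.85, 0.25)))  # long horn, short neck
print(classify((0.05, 0.85)))  # no horn, long neck
```

The quality of the result depends entirely on the quality and coverage of the labels, which is why supervised methods struggle when labeled security data is scarce.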
A type of machine learning that is less common, but perhaps more effective, is known as unsupervised machine learning. This method is much like learning by observation, whereby a computer ingests data and distinguishes patterns on its own. The latter method is typically more appropriate for security use cases like insider threat detection, where datasets are limited and don’t have labels.
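A minimal unsupervised sketch for an insider-threat-style use case might look like the following. The activity numbers are hypothetical: no labels say what "malicious" looks like, so the model learns a baseline of normal behavior from the data itself and flags observations that deviate sharply from it.

```python
import statistics

# Unlabeled observations: e.g., files accessed per day by one employee.
# Nothing here is labeled "normal" or "threat"; the baseline is
# inferred from the data alone.
history = [12, 15, 11, 14, 13, 16, 12, 14, 15, 13]

mean = statistics.mean(history)
stdev = statistics.stdev(history)

def is_anomalous(value, threshold=3.0):
    # Flag observations more than `threshold` standard deviations
    # from the learned baseline.
    return abs(value - mean) / stdev > threshold

print(is_anomalous(14))    # an ordinary day
print(is_anomalous(400))   # a sudden mass download gets flagged
```

Production systems use far richer models than a single mean and standard deviation, but the core idea is the same: distinguish patterns from the data itself rather than from labels that, for insider threats, rarely exist.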
Ultimately, this conversation is not about which AI is better/worse or good/bad — it’s about understanding the different applications of AI and where they are most useful. In an ideal world, the most efficient AI applies the right method for the job, which sometimes may be a combination of machine learning methods. Supervised and unsupervised machine learning both have their applications. The key is to understand the differences and how and when to best apply each for the most effective results.
Stephan Jou is the CTO of Interset, a data analytics company.