Neural Network Verification

Artificial neural networks have emerged as a highly effective form of software and are now used for a variety of tasks, such as game playing and image recognition. There is an increasing trend of incorporating these networks into systems that human lives depend on, such as controllers for autonomous vehicles. However, neural networks are black boxes: it is currently very difficult to reason about how they might react in previously unseen situations. We are working to address this issue by developing techniques for the verification of neural networks: formally proving that a neural network will behave as expected, under any possible set of circumstances.

Verifying neural networks is a difficult problem, both in theory (it is NP-complete even for simple networks and properties) and in practice. However, the structure of a neural network can sometimes be exploited in ways that greatly simplify the problem. We developed an algorithm, called Reluplex, which can efficiently verify networks that use a particular activation function, the rectified linear unit (ReLU), defined as ReLU(x) = max(0, x). Because a ReLU is piecewise linear, its constraints can be handled lazily, splitting into a unit's active and inactive cases only when necessary. Reluplex has been successfully applied to prove properties of networks implementing a controller for the ACAS Xu airborne collision avoidance system, which is currently under development by the US Federal Aviation Administration (for a high-level overview, have a look at this press release). Another interesting application of Reluplex is proving that a neural network is robust to adversarial inputs, meaning that slight perturbations to an input cannot cause it to be misclassified.
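To make the robustness property concrete, here is a minimal sketch of a sound-but-incomplete robustness check based on interval arithmetic. This is not the Reluplex algorithm; the network weights, input point, and perturbation bound below are illustrative assumptions.

```python
# Minimal sketch: checking adversarial robustness of a tiny ReLU network
# with interval arithmetic. Sound but incomplete -- NOT Reluplex itself.
# The weights, input point, and epsilon are illustrative assumptions.
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the input box [lo, hi] through x -> W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def forward(layers, x):
    """Run the network: affine layers with ReLU on all but the last."""
    for i, (W, b) in enumerate(layers):
        x = W @ x + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)   # ReLU(x) = max(0, x)
    return x

def certify_robust(layers, x0, eps):
    """True  -> every input within L-inf distance eps of x0 provably
               gets the same top class as x0.
       False -> inconclusive (the interval bounds were too loose)."""
    lo, hi = x0 - eps, x0 + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    c = int(np.argmax(forward(layers, x0)))
    # Robust if class c's lower bound beats every other class's upper bound.
    return all(lo[c] > hi[j] for j in range(len(lo)) if j != c)

# Illustrative two-layer network with two output classes.
layers = [(np.array([[1.0, -1.0], [0.5, 2.0]]), np.zeros(2)),
          (np.array([[2.0, 0.0], [0.0, 1.0]]), np.zeros(2))]
x0 = np.array([1.0, 0.0])
print(certify_robust(layers, x0, eps=0.05))  # True for this example
```

A check like this can fail to certify a network that is in fact robust, because interval bounds are loose; complete procedures such as Reluplex instead reason about the ReLU cases exactly, at higher computational cost.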

Research on neural network verification is still at an early stage, and there are many open questions on which we are currently working. One topic is how to verify properties of larger networks. Another is how to handle other kinds of networks, e.g. networks with different activation functions, and how to verify other kinds of properties on them. Yet another is how to produce proof artifacts that are externally checkable, so that the verifier itself does not have to be trusted (a small sketch of this idea appears below). And finally, we are looking for additional real-world problems where neural network verification techniques can be applied.
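To illustrate the proof-artifact idea, the sketch below checks a certificate of infeasibility for a system of linear constraints, the kind of sub-query that arises when a verifier case-splits on ReLUs. The certificate format (a Farkas-style witness) and the function names are illustrative assumptions, not the output of any existing tool; the point is that the checker is far simpler than the verifier that produced the result.

```python
# Minimal sketch of an externally checkable proof artifact (an
# illustrative format, not what any existing verifier emits).
# If a verifier claims the constraints A x <= b are infeasible, a
# Farkas-style certificate is a vector y >= 0 with y^T A = 0 and
# y^T b < 0; validating it needs only a few matrix operations, so
# we do not have to trust the verifier itself.
import numpy as np

def check_infeasibility_certificate(A, b, y, tol=1e-9):
    """Accept y as proof that {x : A x <= b} is empty."""
    y = np.asarray(y, dtype=float)
    return (np.all(y >= -tol)
            and np.allclose(y @ A, 0.0, atol=tol)
            and y @ b < -tol)

# Example: x <= 0 and -x <= -1 (i.e. x >= 1) cannot both hold.
A = np.array([[1.0], [-1.0]])
b = np.array([0.0, -1.0])
y = np.array([1.0, 1.0])          # y @ A = [0], y @ b = -1 < 0
print(check_infeasibility_certificate(A, b, y))  # True
```

Because the checker only performs arithmetic on the certificate, it can be kept small enough to audit by hand, shifting trust away from the far more complex verification procedure.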