Last week, I wrote a blog post about how it’s possible to synthesize really robust adversarial inputs for neural networks. The response was great, and I got several requests to write a tutorial on the subject because what was already out there wasn’t all that accessible. This post, written in the form of an executable Jupyter notebook, is that tutorial!
Security/ML is a fairly new area of research, but I think it’s going to be pretty important in the next few years. There’s even a very timely Kaggle competition on the topic, run by Google Brain. I hope that this blog post helps make this really neat field slightly more approachable! Also, the attacks don’t require much compute power, so you should be able to run the code from this post on your laptop.