Dear Fellow Scholars,
this is Two Minute Papers with Károly Zsolnai-Fehér.
This paper does not contain the usual fireworks
that you’re used to in Two Minute Papers,
but I feel that this is a very important story that needs to be told to everyone.
In computer science, we encounter many interesting problems,
like finding the shortest path between two given streets in a city,
or measuring the stability of a bridge.
Up until a few years ago, these were almost exclusively solved
by traditional, handcrafted techniques.
This means a class of techniques designed by hand
by scientists, which are often specific to the problem at hand.
Different problem, different algorithm.
And, fast forward to a few years ago,
we witnessed an amazing resurgence of neural networks and learning algorithms.
Many problems that were previously thought to be unsolvable
crumbled quickly, one after another.
Now it is clear that the age of AI is coming,
and clearly, there are possible applications of it
that we need to be very cautious with.
Since we design these traditional techniques by hand,
the failure cases are often known
because these algorithms are simple enough
that we can look under the hood and make reasonable assumptions.
This is not the case with deep neural networks.
We know that in some cases, neural networks are unreliable.
But it is remarkably hard to identify these failure cases.
For instance, earlier, we talked about a technique called pix2pix
where we could make a crude drawing of a cat
and it would translate it to a real image.
It worked spectacularly in many cases,
but Twitter was also full of examples with really amusing failure cases.
Beyond the unreliability, we have a much bigger problem.
And that problem is adversarial examples.
In an earlier episode, we discussed an adversarial algorithm,
where in an amusing example,
they added a tiny bit of barely perceptible noise to this image,
to make the deep neural network misclassify a bus as an ostrich.
We can even train a new neural network
that is specifically tailored to break the one we have,
opening up the possibility of targeted attacks against it.
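To make this concrete, here is a minimal sketch of such an attack in the spirit of the fast gradient sign method. The model, tensor shapes, and epsilon value are illustrative assumptions on my part, not details from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=0.01):
    # Illustrative sketch (not the paper's method): compute the loss
    # gradient with respect to the input pixels themselves.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Nudge every pixel a tiny, barely perceptible step in the
    # direction that increases the loss the most; this is often
    # enough to flip the predicted class.
    return (images + epsilon * images.grad.sign()).detach()
```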
To alleviate this problem, it is always a good idea to make sure
that these neural networks are also trained on adversarial inputs.
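As a hedged sketch of what that can look like in practice, reusing the hypothetical fgsm_perturb helper from above, one training step might mix clean and perturbed batches; the exact recipe varies from paper to paper.

```python
def adversarial_training_step(model, optimizer, images, labels, epsilon=0.01):
    # Generate adversarial versions of the current batch on the fly.
    adv_images = fgsm_perturb(model, images, labels, epsilon)
    optimizer.zero_grad()
    # Learn from the clean and adversarial inputs together, so the
    # network becomes harder to fool with this kind of noise.
    loss = 0.5 * (F.cross_entropy(model(images), labels)
                  + F.cross_entropy(model(adv_images), labels))
    loss.backward()
    optimizer.step()
    return loss.item()
```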
But how do we know how many other adversarial examples exist
that we haven’t found yet?
The paper discusses a way of verifying important properties of neural networks.
For instance, it can measure the adversarial robustness of such a network,
and this is super useful,
because it tells us whether there are forged inputs
that could break our learning systems.
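The paper relies on a specialized solver for this; as a much simpler stand-in that I am assuming purely for illustration (it is not the paper's technique), interval bound propagation can certify that no input within a small neighborhood of a given example changes a small ReLU network's decision.

```python
import numpy as np

def interval_bounds(weights, biases, lower, upper):
    # Propagate elementwise input bounds through a fully connected
    # ReLU network, layer by layer.
    for i, (W, b) in enumerate(zip(weights, biases)):
        pos, neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        new_lower = pos @ lower + neg @ upper + b
        new_upper = pos @ upper + neg @ lower + b
        if i < len(weights) - 1:  # ReLU on hidden layers only
            new_lower = np.maximum(new_lower, 0.0)
            new_upper = np.maximum(new_upper, 0.0)
        lower, upper = new_lower, new_upper
    return lower, upper

def certified_robust(weights, biases, x, label, epsilon):
    # Robust if the true class's worst-case score still beats every
    # other class's best-case score over the whole epsilon-ball.
    lo, hi = interval_bounds(weights, biases, x - epsilon, x + epsilon)
    return all(lo[label] > hi[j] for j in range(len(lo)) if j != label)
```

If this check passes, we have a proof that no adversarial example exists in that neighborhood; if it fails, the answer is simply inconclusive, which is why dedicated verification tools like the one in the paper matter.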
The paper also contains a nice little experiment
with airborne collision avoidance systems.
The goal here is avoiding midair collisions
between commercial aircraft while minimizing the number of alerts.
As a small-scale thought experiment,
we can train a neural network to replace an existing system,
but in this case, such a neural network would have to be verified.
And it is now finally a possibility.
Now, make no mistake, this does not mean
that any aircraft safety systems deployed in the industry
rely on neural networks.
No no no, absolutely not.
This is a small-scale “what if” kind of experiment
that may prove to be a first step towards something really exciting.
This is one of those incredible papers that,
even without the usual visual fireworks,
makes me feel that I am a part of the future.
This is a step towards a future where we can prove
that a learning algorithm is guaranteed to work in mission-critical systems.
I would also like to note that
even if this episode is not meant to go viral on the internet,
it is still an important story to be told.
Normally, creating videos like this would be a financial suicide,
but we’re not hurt by this at all
because we get stable support from you on Patreon.
And that’s what it is all about –
worrying less about views and spending more time talking about what’s really important.
Thanks for watching and for your generous support,
and I’ll see you next time!