Krishnamurthy (Dj) Dvijotham

DeepMind

Learn more about building an AI-first technology startup on the Air Street Capital blog and our monthly analytical newsletter, Your Guide to AI.

The Research and Applied AI Summit (RAAIS) is a community for entrepreneurs and researchers who accelerate the science and applications of AI technology. We’ve been running for 6 years now and have hosted over fifty entrepreneurs and academics who have built billion-dollar companies and published foundational papers that drive the AI field forward. 

In the lead-up to our 6th annual event, which will be broadcast live online on 26 June 2020, we're running a series of speaker profiles highlighting what you can expect to learn on the day!

Weaknesses of neural networks in safety-critical settings

Krishnamurthy (Dj) Dvijotham

As AI systems occupy a larger deployment footprint in the world around us, the lack of robustness of certain “black box” neural network approaches is an impediment and potentially even a liability. Indeed, AI deployments in environments where the cost of mistakes is high will likely demand provable guarantees for safety and robustness. 

One of the attack vectors that has gathered significant attention in the last 2-3 years is adversarial attacks. These are small changes to the input data, often imperceptible to humans, that provoke a neural network into outputting the wrong prediction with high confidence. Recall Justin Gilmer’s talk on this topic at RAAIS 2018. 
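To make the idea concrete, here is a minimal sketch of one classic attack of this kind, the Fast Gradient Sign Method (FGSM), applied to a toy logistic classifier. The weights and inputs below are illustrative, not from any real model: each input feature is nudged by a small amount `eps` in the direction that increases the loss, and the model's confidence in the correct label drops.

```python
import numpy as np

# Toy logistic "classifier": sigmoid(w . x + b). Weights are illustrative, not trained.
w = np.array([2.0, -3.0, 1.5])
b = 0.1

def predict(x):
    """Probability assigned to class 1."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

def fgsm(x, y, eps):
    """Fast Gradient Sign Method: perturb each feature by eps in the sign
    of the loss gradient. For logistic regression with cross-entropy loss,
    the gradient of the loss w.r.t. x is (p - y) * w."""
    p = predict(x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

x = np.array([0.5, -0.2, 0.3])     # clean input, true label 1
x_adv = fgsm(x, y=1.0, eps=0.3)

print(predict(x))      # confident, correct prediction on the clean input
print(predict(x_adv))  # confidence collapses on the perturbed input
```

The perturbation here is bounded elementwise by `eps`, yet it is enough to flip the model from a confident correct answer toward uncertainty; on image classifiers the same trick produces perturbations invisible to the human eye.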

Robust and verifiable AI systems

While several approaches to defending against adversarial attacks have been proposed, there remains a need for formal verification: a provable guarantee that a neural network is consistent with a specification for all possible inputs to the network. 

Krishnamurthy (Dj) Dvijotham is a senior research scientist at DeepMind who works on these problems. His research focus is on building robust and verifiable AI systems that can be trusted to behave reasonably even under adversarial circumstances. Recently, he has published work that presents a dual approach to verify and train deep networks, scalable verified training for provably robust image classification, safe exploration in continuous action spaces, and training verified learners with learned verifiers. His work has received several best paper awards at AI conferences (UAI 2018, CP 2016, UAI 2014 and ECML 2008).
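One simple technique in this space, which we sketch here only as an illustration of what "verified" means (the toy network and weights below are made up), is interval bound propagation: given elementwise bounds on the input, propagate intervals layer by layer to obtain provable bounds on the network's outputs. If the lower bound on a margin output stays positive, the property holds for every input in the perturbation set, not just sampled ones.

```python
import numpy as np

def ibp_linear(lo, hi, W, b):
    """Provable bounds on W @ x + b given elementwise lo <= x <= hi.
    Uses the center/radius form: |W| maps the radius, W maps the center."""
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    mid = W @ center + b
    rad = np.abs(W) @ radius
    return mid - rad, mid + rad

def ibp_relu(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to interval endpoints."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Tiny two-layer ReLU network with illustrative weights.
W1 = np.array([[1.0, -1.0], [0.5, 2.0]]); b1 = np.zeros(2)
W2 = np.array([[1.0, -1.0]]);             b2 = np.zeros(1)

x, eps = np.array([1.0, 0.0]), 0.1        # nominal input and perturbation budget
lo, hi = ibp_linear(x - eps, x + eps, W1, b1)
lo, hi = ibp_relu(lo, hi)
lo, hi = ibp_linear(lo, hi, W2, b2)

# If lo > 0, the margin is provably positive for ALL inputs within eps of x.
print(lo, hi)
```

The bounds are conservative, but they are sound: no adversarial search inside the `eps`-ball can ever violate a property the intervals certify, which is exactly the kind of guarantee safety-critical deployments demand.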


To find all of Dj’s research, head over here, and visit his homepage here.

We’re excited to be hosting Dj at RAAIS 2020, welcome!