BCTCS 2018: Day 1, Is this really London?

Editor’s note: This is the first in a series of blog reports by my PhD student Gary Bennett who attended the annual British Colloquium for Theoretical Computer Science (BCTCS) 2018  at Royal Holloway University of London. Apparently, he really enjoyed himself!

The British Colloquium for Theoretical Computer Science (BCTCS) is an annual event for UK-based researchers in theoretical computer science. The conference gives PhD students the opportunity to present and discuss their research with other PhD students as well as established researchers from all over the country.

Royal Holloway hosted BCTCS this year. Royal Holloway was founded in 1849 as one of the first colleges dedicated to providing women with access to higher education, by the Victorian entrepreneur and philanthropist Thomas Holloway, following the inspiration of his wife Jane. The college was officially opened by Queen Victoria in 1886. The campus grounds are particularly beautiful, and the Founder's Building is a very popular filming location for both TV and film.

Royal Holloway University of London Founder’s Building

Deep neural networks were one of the big draws on the conference's first day. Deep neural networks are unrivalled in the domain of image classification. However, they are particularly vulnerable to adversarial perturbations of their input: changing the value of a single pixel can cause a misclassification. With this technology being used in self-driving cars, it is only fair to ask:

Are deep neural networks really safe?

To help put our minds at ease, Marta Kwiatkowska of Oxford University has developed a novel automated verification framework for neural networks that can provide guarantees that an adversarial image will be found if one exists, so that, for example, a self-driving car can be verified to detect an object on the road that might cause a collision!
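To make the idea of an adversarial perturbation concrete, here is a minimal sketch using a toy linear classifier rather than a real deep network; the weights, inputs, and perturbation size are made up for illustration. It shows the core intuition behind gradient-sign attacks: nudge the input in the direction the classifier is most sensitive to, and a tiny change flips the decision.

```python
import numpy as np

# Toy "classifier": one weight per pixel (all values are illustrative).
w = np.array([0.5, -0.3, 2.0, 0.1])
b = 0.0

def classify(x):
    # Class 1 if the linear score is positive, else class 0.
    return int(x @ w + b > 0)

x = np.array([1.0, 1.0, -0.2, 1.0])
# Score = 0.5 - 0.3 - 0.4 + 0.1 = -0.1, so x is classified as class 0.
print(classify(x))  # 0

# Adversarial single-pixel change: pick the pixel the classifier is most
# sensitive to and nudge it along the sign of its weight (the idea behind
# gradient-sign attacks on real networks).
x_adv = x.copy()
i = np.argmax(np.abs(w))            # most influential "pixel"
x_adv[i] += 0.1 * np.sign(w[i])     # tiny perturbation: -0.2 becomes -0.1
# Score = 0.5 - 0.3 - 0.2 + 0.1 = +0.1, so x_adv is classified as class 1.
print(classify(x_adv))  # 1
```

A verification framework like the one described in the talk aims to either find such a perturbation automatically or prove that none exists within a given distance of the input.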

John E Hopcroft at BCTCS 2018. More pictures at http://bctcs18.cs.rhul.ac.uk/

The prominent computer scientist John E. Hopcroft was the London Mathematical Society's (LMS) invited speaker. John's talk started with one of the recent major advances in AI: in 2012, AlexNet won the ImageNet Challenge with a deep neural network whose top-5 error was more than 10 percentage points lower than that of the runner-up. However, we still understand very little about why deep learning works. The questions being asked in deep learning are:

Is the structure of the network more important than the training?

Can a network be trained much quicker than at present?

Do we even need a large training set?

After all, when a child learns what an object is, we do not need to show them thousands of examples!

The first day of BCTCS has been fantastic! My turn tomorrow.


Author: amitabht

Lecturer, Loughborough University, England
