Editor’s note: This is the first in a series of blog reports by my PhD student Gary Bennett, who attended the annual British Colloquium for Theoretical Computer Science (BCTCS) 2018 at Royal Holloway, University of London. Apparently, he really enjoyed himself!
The British Colloquium for Theoretical Computer Science (BCTCS) is an annual event for UK-based researchers in theoretical computer science. The conference gives PhD students the opportunity to present and discuss their research with other PhD students, as well as with established researchers from all over the country.
Royal Holloway hosted BCTCS this year. Royal Holloway was founded in 1849 as one of the first colleges dedicated to providing women with access to higher education, by the Victorian entrepreneur and philanthropist Thomas Holloway, following the inspiration of his wife Jane. The college was officially opened by Queen Victoria in 1886. The campus grounds are particularly beautiful, with the Founder’s Building being a very popular filming location for both TV and film.
Deep neural networks were one of the big draws on the conference’s first day. Deep neural networks are unrivalled in the domain of image classification. However, they are particularly vulnerable to adversarial perturbations of their input: changing the value of a single pixel can cause a misclassification. With this technology being used in self-driving cars, it is only fair to ask —
Are deep neural networks really safe?
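To see how a single-pixel change can flip a classification, here is a minimal sketch. It is not a real deep network or any framework from the talks, just a hypothetical linear classifier over a 3×3 image whose score sits close to the decision boundary, so one pixel is enough to tip it over.

```python
# Toy single-pixel adversarial example (illustrative only, not a real
# network): a linear classifier over a flattened 3x3 grayscale image.
# The weights and image below are made up so the score is barely positive.

def classify(image, weights):
    """Return 'cat' if the weighted pixel sum is positive, else 'dog'."""
    score = sum(w * p for w, p in zip(weights, image))
    return "cat" if score > 0 else "dog"

weights = [0.5, -0.2, 0.1, 0.3, -0.4, 0.2, -0.1, 0.6, -0.3]
image   = [0.2,  0.1, 0.5, 0.4,  0.3, 0.6,  0.2, 0.3,  0.1]

print(classify(image, weights))      # the clean image is classified 'cat'

# Adversarial perturbation: nudge just one pixel (index 7) downwards.
perturbed = list(image)
perturbed[7] -= 0.7

print(classify(perturbed, weights))  # the same classifier now says 'dog'
```

Real attacks on deep networks work on the same principle, except that finding which pixel to change, and by how much, requires searching a vastly larger input space — which is exactly what makes verification hard.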
To help put our minds at ease, Marta Kwiatkowska of Oxford University (talk title: Safety Verification for Deep Neural Networks) has developed a novel automated verification framework for neural networks that is able to provide some guarantees that an adversarial image will be found if one exists — e.g., that a self-driving car will be able to detect an object on the road that may cause a collision!
The prominent computer scientist John E. Hopcroft was the London Mathematical Society’s (LMS) invited speaker (talk: Research in Deep Learning). John’s talk started with one of the recent major advancements in AI: in 2012, AlexNet won the ImageNet Challenge with a deep neural network whose top-5 error was more than 10 percentage points lower than that of the runner-up. However, we still understand very little about why deep learning works. Among the questions being asked in deep learning are:
Is the structure of the network more important than the training?
Can a network be trained much quicker than at present?
Do we even need a large training set?
After all, when a child learns what an object is, we do not need to show them thousands of examples!
The first day of BCTCS has been fantastic! It’s my turn to present tomorrow.