A postdoc position to work with me on an EPSRC research project at Loughborough University is available from July 2017.

I have a position for a one-year (in the first instance) postdoctoral research associate to work with me at Loughborough University. The position, supported by the EPSRC first grant COSHER (Compact Self-Healing Routing), comes with a good salary (in the UK system) and other perks and training opportunities. The project is available here at the RCUK gateway. The related research question is described in my earlier post here.

The earliest (and expected) start date is July 1st, 2017, but a later start date may be possible. The formal advertisement will be out in the coming week but please get in touch with me for more details!

Author Cathy O’Neil (The mathbabe!) in her book Weapons of Math Destruction (What a name!) states that we should remember that predictive models and algorithms are really just “opinions embedded in math.” Well said, indeed. After all, maths is another language – though a very powerful one for emotionless, ‘logical’ calculus.

Maybe time for all to take the Modeler’s Hippocratic Oath (Derman and Wilmott):

∼ I will remember that I didn’t make the world, and it doesn’t satisfy my equations.

∼ Though I will use models boldly to estimate value, I will not be overly impressed by mathematics.

∼ I will never sacrifice reality for elegance without explaining why I have done so.

∼ Nor will I give the people who use my model false comfort about its accuracy. Instead, I will make explicit its assumptions and oversights.

∼ I understand that my work may have enormous effects on society and the economy, many of them beyond my comprehension.

Thanks to the Newton Fund and the Mexican Academy of Sciences (AMC) I get to spend six weeks in Mexico City visiting my colleague at UNAM, researching, what else but Compact Self-Healing Routing… and authentic Mexican food 😉

The first part of this post is by way of thanks to the Newton Fund, the UK academies (say, The Royal Society) and the Mexican Academy of Sciences (AMC) for a Newton mobility grant which will allow me to visit my colleague Armando Castaneda at UNAM for six weeks in August and September this year. The call funds up to three months of ‘foreign activity’, so I could have asked for more time, but I was unsure of being able to get away for so long. Armando managed to visit me in Belfast some time ago, so this can even be thought of as a return visit! He even managed to time his visit to coincide with the Belfast Marathon (while he’s training for a marathon) – ‘curiouser and curiouser’, as Alice would say!

So, what’s this resilient compact routing? About three years ago, I was fortunate to be an I-CORE postdoc with Danny Dolev while Armando was a postdoc at the Technion. We started discussing ideas around routing and, possibly due to my long line of work on self-healing algorithms (on which many blog posts may follow!), we started to gravitate towards the question: can we route messages despite failures in the network? At about the same time, Shiri Chechik bested our leader election paper (On the complexity of universal leader election) with her paper Compact routing schemes with improved stretch for the best paper award at PODC 2013. With many life-changing events in between (such as getting faculty positions and moving to many degrees’ drop in average temperature and many degrees more of precipitation!), the first paper in this line only managed to struggle over the line in early 2016. Compact Routing messages in self-healing trees (arXiv) was a finalist for the best paper award at ICDCN 2016.

So, what’s this resilient compact routing (take 2)? Routing is a very important `primitive’ for networks – the ability of the network to take a message from a source node and deliver it to a target. We encounter it every time we get onto a network – as soon as we connect to a router, visit a website, send an email, make a Skype/VoIP call, etc. In practice, the most used protocols for routing are based on well-known standard graph distance-finding algorithms such as Dijkstra’s and Bellman-Ford, which is itself a testament to the longevity of these algorithms and to the power of graph algorithms in general. If a node x gets a packet which started at node a and needs to end at node b, node x will refer to its routing table – a table which tells it which of its neighbours to send the packet to.
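To make the table idea concrete, here is a hedged sketch (the graph, names and structure are my own toy illustration, not any real protocol): node x’s table can be built by running Dijkstra’s algorithm from x and recording, for each destination, the first hop on a shortest path.

```python
import heapq

def routing_table(graph, x):
    """graph: {node: {neighbour: edge_weight}}.
    Returns node x's routing table: {destination: neighbour of x
    to forward a packet addressed to that destination}."""
    dist = {x: 0}
    first_hop = {}
    # priority queue of (distance, node, first hop used to reach it)
    pq = [(0, x, None)]
    while pq:
        d, u, hop = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue  # stale entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                # if u is the source itself, the first hop is v
                first_hop[v] = v if u == x else hop
                heapq.heappush(pq, (nd, v, first_hop[v]))
    return first_hop
```

For instance, in the invented graph `{'x': {'a': 1, 'b': 4}, 'a': {'x': 1, 'b': 1}, 'b': {'x': 4, 'a': 1}}`, the shortest route from x to b goes via a, so x’s table forwards packets for both a and b to neighbour a.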

Often, a routing table will contain an entry for every node in the network telling where to forward a message addressed to that node. Now, this means the table can be really, really huge depending on the size of the network. In practice, there are ways around this. One way, which makes for some nice theory, is to do some preprocessing on the network, e.g. build spanning trees, do a DFS traversal, maybe some renaming and port changes, to reduce the size of the routing tables and the packet header. The crux of many of these schemes seems to be the (now seemingly simple) idea of interval routing (which by itself may not be compact), introduced by Santoro and Khatib in 1982. The idea may be summarised as follows:

Starting from a particular node, do a Depth First Search (DFS) traversal and construct the corresponding DFS tree. Also, give each node a label that is that node’s DFS number (ID) (say, the time step at which that node was first encountered in the DFS traversal).

Now, at each node, if you store the ‘largest’ ID of its subtree and the IDs of its children, you get intervals – which tell a node to which of its tree neighbours it should send a packet addressed to another node. Hence, the name Interval Routing.
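The two steps above can be sketched in a few lines (a toy version under my own naming – the real schemes add the compaction tricks on top of this skeleton):

```python
def dfs_label(tree, root):
    """Assign each node its DFS discovery-time label, and record the
    largest label appearing in each node's subtree."""
    label, high = {}, {}
    counter = [0]
    def visit(u):
        label[u] = counter[0]
        counter[0] += 1
        for c in tree.get(u, []):
            visit(c)
        # largest DFS label in u's subtree (u itself, or a descendant)
        high[u] = max([label[u]] + [high[c] for c in tree.get(u, [])])
    visit(root)
    return label, high

def next_hop(tree, parent, label, high, v, target_label):
    """At node v, pick the tree neighbour to forward a packet
    addressed to target_label; None means 'deliver here'."""
    if label[v] == target_label:
        return None
    for c in tree.get(v, []):
        # child c's subtree covers exactly the interval [label[c], high[c]]
        if label[c] <= target_label <= high[c]:
            return c
    return parent[v]  # target is not below us: send towards the root
```

So each node only needs one interval per tree neighbour, rather than one entry per node in the network – that is the seed of the whole compact-routing story.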

This spawned an active and productive field of research over the last few decades, particularly in the effort to reduce both the size of the labels and the size of the tables, while minimising the necessary trade-off paid in terms of distance. It is well known that if you reduce the space you use for routing, you cannot always use the shortest paths and must pay in some measure by using a longer path (this measure is called stretch).

Looking at the above description, one will notice that the construction of these data structures seems to rely heavily on the initial DFS traversal. This implies that one may need to do a lot of recomputation if the network changes. In this spirit, there are some (but not many) works on `resilient’ compact routing, i.e. compact routing that can handle changes to the network. Ours is one such attempt – in which we show how to do the well-known Thorup-Zwick routing (over trees, for now) in the self-healing model. In brief, the self-healing model is a responsive model of resilience where an adversary chooses a node/processor to take down or even insert (presumably to do the most damage in either case) and the neighbours of the attacked node/processor react by adding some edges/connections to the graph/network. We show how to do both the compact routing and the self-healing in low memory (at most polylogarithmic in n, where n is the number of nodes in the network).
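Here is a deliberately naive sketch of the self-healing reaction to a deletion: the neighbours of the deleted node patch themselves together along a path, so the network stays connected. (The actual self-healing algorithms are far more careful – e.g. about how much any node’s degree or the network’s diameter may grow – this is only to fix the idea.)

```python
def heal_deletion(graph, v):
    """graph: {node: set of neighbours}. The adversary removes node v;
    v's neighbours react by adding edges along a path between
    themselves, keeping the remaining graph connected."""
    neighbours = sorted(graph.pop(v))
    for u in graph:
        graph[u].discard(v)  # everyone forgets the dead node
    # the healing step: consecutive neighbours of v link up
    for a, b in zip(neighbours, neighbours[1:]):
        graph[a].add(b)
        graph[b].add(a)
    return graph
```

For example, deleting the centre of a star graph this way turns the orphaned leaves into a path instead of leaving them disconnected.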

I will defer the technical details of both self-healing and our compact self-healing routing to another post but needless to say, I am quite excited about continuing this work!

The 32nd British Colloquium of Theoretical Computer Science (BCTCS 2016) was held from March 22nd to 24th at Queen’s University Belfast (QUB). Here are some insights, pictures and links to some talks.

The 32nd British Colloquium of Theoretical Computer Science (BCTCS 2016) was held in Belfast from March 22nd to 24th at the picturesque setting of Queen’s University Belfast and the newly remodelled Graduate School. It was a good amount of work – a bit like a 200-metre race where you accelerate rapidly and take a while to stop! This was more so because my colleague Alan Stewart, who brought the conference here, promptly retired, leaving me in charge! (Though he still did much of the work even post-retirement 🙂)

I did my PhD in the USA in the area of algorithms, and I discovered that the areas of focus in theoretical computer science differ markedly between the US and the UK. The UK has traditional strength in languages and logic, whereas the US seems to have more strength in algorithms-based theory (this is something that the EPSRC readily admits!). BCTCS had good representation across the themes, particularly since some of our speakers were from across the pond(s) (including Iceland!).

Slides from some of the talks are available at http://www.amitabhtrehan.net/bctcs.html

What are the distributed algorithms behind cell communication? Stuck in a sandpit, my colleagues at QUB and I gathered up some ideas which will hopefully also find some applications.

Ever been in a sandpit? When I was asked to be in one (called the Applied Maths Sandpit, at our new EPS faculty) I was not sure what it would be. It could be an innocent fun activity in the sand as in the first picture, an unlikely dream holiday at a golf course (as in the second picture), or, most likely, a gruelling day which I was not forewarned of (third picture). As it turns out, it was rather nice and, ultimately, useful. There was no sand around, of course! (And neither was there much warm sun, unsurprisingly for here.)

A U.S. Soldier roughing it out in a sandpit! [http://www.defense.gov/Media/Photo-Gallery?igphoto=2001253076]

For a start, I met a few brilliant colleagues I did not know existed. Then, in a day of real intense, cut-throat, Dragons’ Den-like competition, we pitched our mostly half-baked ideas (btw, I am joking about the ‘cut-throat’ – after all, we were (applied) mathematicians at the gathering, not MBA entrepreneurs!). Amazingly, at the end of it, our emergently formed motley crew of me, Fred Currell, Thomas Huettemann, Dermot Green and Alessandro Ferraro (all except me from QUB Maths and Physics) has been given resources to recruit and spoil a PhD student for a proposal built on the ideas of this post.

So, here comes a very high level, sparse, ambitious, and rough sketch:

Living organisms can be thought of as clusters of cells in communication (typically at gap-junction interfaces). Within interacting communities, new cells are born and old ones are removed through (sometimes programmed) death. There is a strong environmental influence on these processes. On a much smaller scale, quantum-mechanical processes are at work within cells, complicating the picture further. We think real-life processes are efficient, fault-tolerant, self-healing and scalable, leading us to hypothesise that there must be powerful distributed algorithms somewhere in these networks of cells, waiting to be discovered.

Networks are often modelled as graphs: the cells are nodes and a common surface between two cells facilitates communication. In biological systems, things change, and this dynamism in networks is often addressed by failure models, including adversarial and accidental (random) death. The network can react to these changes in various ways, and we seek a mathematical framework in which to formulate and analyse the various questions arising.

The team thought of three systems which could be of interest (some of the team already work on these though I know little about them at the moment):

Volvocine green algae — These capture the evolutionary emergence of multicellularity, including what is believed to be the simplest multi-cellular eukaryotic organism: Tetrabaena socialis.

Light harvesting in photosynthetic organisms — the mechanism whereby living organisms harvest energy from light is believed to be one of the clearest biological systems countering the view that life is too “warm and wet” for quantum phenomena to be relevant.

A quantum machine for efficient light-energy harvesting (from the paper)

The tumour spheroid — a simple multicellular mimic of a tumour, this system is amenable to direct laboratory study and is known to show many of the hallmarks of cancer, including the lack of growth regulation mechanisms, meaning it seeks to grow avidly.

A study showing effects on size, shape and growth rate of tumours (from the paper)

Hunting for the ultimate questions and answers a la ‘hitchhiker’s guide to the galaxy’, and some rather ‘useful’ algorithms!

Might as well begin with the answer to the ‘Ultimate Question of Life, the Universe, and Everything’. I would have titled my blog ‘The Algorithm’ if I knew how to reach the answer! Of course, the answer is deduced by the supercomputer, Deep Thought, after 7½ million years of computation. Deep Thought then points out that the answer seems meaningless because the beings who instructed it never actually knew what the Question was. So, when asked to produce The Ultimate Question, Deep Thought builds an even more powerful computer (the planet Earth) to do so, which is supposed to run for 10 million years to find the question and so on… (and you know the rest). Of course, independently, the answer is also discovered to be 9 times 6 (which equals 42 – in base 13, at least!).
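(Strictly, 9 times 6 is 54 – the joke only lands if you read ‘42’ in base 13, which a throwaway snippet confirms:)

```python
# Deep Thought's 'nine times six' arithmetic, checked in base 13:
# 9 * 6 = 54 in decimal, and 54 written in base 13 has digits 4 and 2.
product = 9 * 6
digits = divmod(product, 13)       # (4, 2): the base-13 digits '4', '2'
assert digits == (4, 2)
assert int('42', 13) == product    # reading '42' in base 13 gives 54
```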

So, to begin with, here’s an algorithm ….

Question: Are you aware of other such fundamentally useful algorithms?