We seek candidates with a strong interest in, and willingness to explore, topics including but not restricted to the following: i) Graph algorithms and theory, ii) Self-healing, Byzantine and other forms of resilient algorithms, iii) Compact routing and memory-limited algorithms, iv) Static and dynamic leader election and consensus, v) Connections between distributed algorithms and research areas such as parameterised complexity, topology, combinatorics, communication complexity, and spectral and algebraic tools, vi) Algorithmic game theory and decision making, vii) Modelling and application to modern networks such as IoT and SDN.
The successful candidate will work closely with active research groups centred around both CS theory and networks. In particular, the candidate will benefit from interaction with upcoming research on compact self-healing routing algorithms supported by EPSRC (research grant EP/P021247/1).
Thanks to the Newton Fund and the Mexican Academy of Sciences (AMC), I get to spend six weeks in Mexico City visiting my colleague at UNAM, researching, what else but Compact Self-Healing Routing… and authentic Mexican food 😉
The first part of this post is by way of thanks to the Newton Fund, the UK academies (say, The Royal Society) and the Mexican Academy of Sciences (AMC) for a Newton mobility grant which will allow me to visit my colleague Armando Castaneda at UNAM for six weeks in August and September this year. The call funds up to three months of ‘foreign activity’, so I could possibly have asked for more time, but I was unsure of being able to get away for so long. Armando managed to visit me at Belfast some time ago, so this can even be thought of as a return visit! He even managed to time that visit to coincide with the Belfast Marathon (while he was training for a marathon) – `curiouser and curiouser’, as Alice would say!
So, what’s this resilient compact routing? About three years ago, I was fortunate to be an I-CORE postdoc with Danny Dolev while Armando was a postdoc at Technion. We started discussing ideas around routing, and possibly due to my long line of work on self-healing algorithms (on which many blog posts may follow!), we gravitated towards the question: can we route messages despite failures in the network? At about the same time, Shiri Chechik bested our leader election paper (On the complexity of universal leader election) with her paper Compact routing schemes with improved stretch for the best paper award at PODC 2013. With many life-changing events in between (such as getting faculty positions and moving to places many degrees lower in average temperature and many degrees higher in precipitation!), the first paper in this line finally made it out in early 2016: Compact Routing Messages in Self-Healing Trees (Arxiv) was a finalist for the best paper award at ICDCN 2016.
So, what’s this resilient compact routing (take 2)? Routing is a very important `primitive’ for networks – the ability of the network to take a message from a source node and deliver it to a target. We encounter it every time we get onto a network – as soon as we connect to a router or a website, send an email, make a Skype/VoIP call, etc. In practice, the most used routing protocols are based on well-known graph distance-finding algorithms such as Dijkstra’s and Bellman-Ford, which is itself a testament to the longevity of these algorithms and to the power of graph algorithms in general. If a node x gets a packet which started at node a and needs to end at node b, node x refers to its routing table – a table which tells it which of its neighbours to forward the packet to.
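To make the routing-table idea concrete, here is a toy Python sketch (the graph, node names and helper function are made up for this post, not taken from any of the papers mentioned) that builds a node’s table by running Dijkstra’s algorithm and remembering the first hop on each shortest path:

```python
import heapq

def routing_table(graph, x):
    """Build node x's routing table: destination -> next-hop neighbour.
    Runs Dijkstra from x and records the first hop on each shortest path.
    graph: {node: {neighbour: edge_weight}}"""
    dist = {x: 0}
    first_hop = {}          # destination -> neighbour of x to forward to
    pq = [(0, x, None)]     # (distance, node, first hop used to reach it)
    while pq:
        d, u, hop = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue        # stale queue entry, skip
        if u != x:
            first_hop[u] = hop
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                # the first hop out of x is v itself when u == x
                heapq.heappush(pq, (nd, v, v if u == x else hop))
    return first_hop

# A small example network
g = {
    "a": {"x": 1},
    "x": {"a": 1, "b": 2, "c": 5},
    "b": {"x": 2, "c": 1},
    "c": {"x": 5, "b": 1},
}
table = routing_table(g, "x")
print(table)  # {'a': 'a', 'b': 'b', 'c': 'b'} – reaching c via b beats the direct edge
```

Note that node x only needs the next hop, not the whole path – but with one entry per destination the table still grows linearly with the network, which is exactly what compact routing tries to avoid.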
Often, a routing table will contain an entry for every node in the network telling where to forward a message addressed to that node. This means the table can grow enormous – linear in the size of the network. In practice, there are ways around this. One way, which makes for some nice theory, is to do some preprocessing on the network, e.g. build spanning trees, do a DFS traversal, maybe some renaming and port changes, to reduce the size of the routing tables and the packet headers. The crux of many of these schemes is the (now seemingly simple) idea of interval routing (which by itself may not be compact), introduced by Santoro and Khatib in 1982. The idea may be summarised as follows:
Starting from a particular node, do a Depth First Search (DFS) traversal and construct the corresponding DFS tree. Also, give each node a label that is that node’s DFS number (ID) (say, the time step at which that node was first encountered in the DFS traversal).
Now, if each node stores the ‘largest’ ID in its subtree and the IDs of its children, you get intervals – which tell you to which tree neighbour a node should send a packet addressed to any other node. Hence, the name Interval Routing.
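Here is a small Python sketch of the two steps above on a toy tree (the tree, node names and helper functions are mine, purely for illustration): each node keeps its DFS number and the largest DFS number in its subtree, and a routing step simply checks which child’s interval contains the target:

```python
def dfs_label(tree, root):
    """Assign DFS numbers and the largest DFS number in each subtree.
    tree: {node: [children]} (a rooted tree). Returns (num, max_in_subtree)."""
    num, max_in = {}, {}
    counter = 0
    def visit(u):
        nonlocal counter
        num[u] = counter
        counter += 1
        m = num[u]
        for c in tree.get(u, []):
            visit(c)
            m = max(m, max_in[c])
        max_in[u] = m
    visit(root)
    return num, max_in

def next_hop(tree, parent, num, max_in, u, target_id):
    """Interval routing step at node u for a packet addressed to target_id:
    forward to the child whose interval [num[c], max_in[c]] contains the
    target, otherwise send the packet up to the parent."""
    for c in tree.get(u, []):
        if num[c] <= target_id <= max_in[c]:
            return c
    return parent[u]

# Toy tree rooted at 'r':    r
#                           / \
#                          a   b
#                         / \
#                        c   d
tree = {"r": ["a", "b"], "a": ["c", "d"]}
parent = {"a": "r", "b": "r", "c": "a", "d": "a"}
num, max_in = dfs_label(tree, "r")
print(num)  # {'r': 0, 'a': 1, 'c': 2, 'd': 3, 'b': 4}
print(next_hop(tree, parent, num, max_in, "a", num["b"]))  # 'b' is outside a's subtree -> 'r'
print(next_hop(tree, parent, num, max_in, "r", num["d"]))  # 'd' lies in a's interval -> 'a'
```

Each node stores only its own ID, its children’s intervals and its parent – a constant number of O(log n)-bit words per tree neighbour, rather than an entry per destination.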
This spawned an active and productive field of research over the last few decades, particularly in the effort of reducing both the size of the labels and the tables while keeping the necessary tradeoff in distance small. It is well known that if you reduce the space you use for routing, you cannot always use the shortest paths and must pay by using longer ones; the measure of this penalty is called stretch.
Looking at the above description, one will notice that these data structures rely heavily on the initial DFS traversal. This implies that one may need to do a lot of recomputation if the network changes. In this spirit, there are some (but not many) works on `resilient’ compact routing, i.e. compact routing that can handle changes to the network. Ours is one such attempt – we show how to do the well-known Thorup-Zwick routing (over trees, for now) in the self-healing model. In brief, the self-healing model is a responsive model of resilience: an adversary chooses a node/processor to take down or even insert (presumably to do the most damage in either case), and the neighbours of the attacked node/processor react by adding some edges/connections to the graph/network. We show how to do both the compact routing and the self-healing in low memory (O(log^2 n) at most, where n is the number of nodes in the network).
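As a crude illustration of the self-healing model (a naive patch of my own for this post – the actual self-healing algorithms maintain much stronger guarantees on degrees and distances than this), here the neighbours of a deleted node react by reconnecting themselves into a ring:

```python
def heal(graph, victim):
    """Toy self-healing step: the adversary deletes `victim`, and the
    victim's surviving neighbours respond by connecting themselves into a
    ring so the network stays connected. This is only an illustration of
    the model, not the algorithm from the paper.
    graph: {node: set(neighbours)}."""
    nbrs = sorted(graph.pop(victim))
    for u in nbrs:
        graph[u].discard(victim)
    # link the orphaned neighbours in a ring
    for u, v in zip(nbrs, nbrs[1:] + nbrs[:1]):
        if u != v:
            graph[u].add(v)
            graph[v].add(u)
    return graph

# Star centred on 'hub': deleting the hub would otherwise disconnect it
g = {"hub": {"a", "b", "c"}, "a": {"hub"}, "b": {"hub"}, "c": {"hub"}}
heal(g, "hub")
print(g)  # a, b and c now form a triangle: still connected
```

Even this toy shows the flavour of the model: the repair is local (only the victim’s neighbours act), and the interesting questions are how much the degrees, distances and – in our setting – the routing labels degrade as attack after attack is healed.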
I will defer the technical details of both self-healing and our compact self-healing routing to another post but needless to say, I am quite excited about continuing this work!
What are the distributed algorithms behind cell communication? Stuck in a sandpit, my colleagues at QUB and I gathered up some ideas which will hopefully also find some applications.
Ever been in a sandpit? When I was asked to be in one (the Applied Maths Sandpit at our new EPS faculty), I was not sure what it would be. It could be an innocent fun activity in the sand as in the first picture, an unlikely dream holiday at a golf course (as in the second picture), or, most likely, a gruelling day which I was not forewarned of (third picture). As it turns out, it was rather nice and, ultimately, useful. There was no sand around, of course! (And, unsurprisingly for here, neither was there much warm sun.)
A U.S. Soldier roughing it out in a sandpit! [http://www.defense.gov/Media/Photo-Gallery?igphoto=2001253076]
For a start, I met a few brilliant colleagues I did not know existed. Then, in a day of real intense, cut-throat, Dragons’ Den-like competition, we pitched our mostly half-baked ideas (btw, I am joking about the ‘cut-throat’ – after all, we were (applied) mathematicians at the gathering, not MBA entrepreneurs!). Amazingly, at the end of it, our emergently formed motley crew of me, Fred Currell, Thomas Huettemann, Dermot Green and Alessandro Ferraro (all except me from QUB Maths and Physics) was given resources to recruit and spoil a PhD student for a proposal that makes up the ideas of this post.
So, here comes a very high level, sparse, ambitious, and rough sketch:
Living organisms can be thought of as clusters of cells in communication (typically at gap-junction interfaces). Within interacting communities, new cells are born and old ones are removed through (sometimes programmed) death. There is a strong environmental influence on these processes. On a much smaller scale, quantum-mechanical processes are at work within cells, complicating the picture further. We think real-life processes are efficient, fault-tolerant, self-healing and scalable, leading us to hypothesise that there must be powerful distributed algorithms somewhere in these networks of cells waiting to be discovered.
Networks are often modelled as graphs: the cells are nodes and a common surface between two cells facilitates communication. In biological systems, things change, and this dynamism in networks is often addressed by failure models, including adversarial and accidental (random) death. The network can react to these changes in various ways, and we seek a mathematical framework in which to formulate and analyse the various questions arising.
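As a toy of the sort of model we have in mind (entirely illustrative – the grid of cells, the failure rule and the connectivity check are simplistic stand-ins I made up for this post), here is a Python sketch in which a random cell in a small grid dies and we ask whether the survivors can still communicate:

```python
import random
from collections import deque

def connected(graph):
    """BFS check that every surviving cell can still reach every other."""
    if not graph:
        return True
    start = next(iter(graph))
    seen = {start}
    q = deque([start])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in seen:
                seen.add(v)
                q.append(v)
    return len(seen) == len(graph)

def random_death(graph, rng):
    """Accidental failure model: a uniformly random cell dies."""
    victim = rng.choice(sorted(graph))
    graph.pop(victim)
    for nbrs in graph.values():
        nbrs.discard(victim)

# A 2x3 grid of cells sharing gap junctions with their grid neighbours
cells = {(r, c): set() for r in range(2) for c in range(3)}
for (r, c) in cells:
    for (dr, dc) in ((0, 1), (1, 0)):
        if (r + dr, c + dc) in cells:
            cells[(r, c)].add((r + dr, c + dc))
            cells[(r + dr, c + dc)].add((r, c))

rng = random.Random(0)
random_death(cells, rng)
print(len(cells), connected(cells))  # 5 True – this grid survives any single death
```

Swapping `random_death` for an adversarial rule (kill the cell whose removal does the most damage), or adding a reaction step where neighbours rewire, is exactly the kind of variation the framework should make easy to state and analyse.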
The team thought of three systems which could be of interest (some of the team already work on these, though I know little about them at the moment):
Volvocine green algae — These capture the evolutionary emergence of multicellularity, including what is believed to be the simplest multi-cellular eukaryotic organism: Tetrabaena socialis.
Light harvesting in photosynthetic organisms — the mechanism whereby living organisms harvest energy from light is believed to be one of the clearest biological systems countering the view that life is too “warm and wet” for quantum phenomena to be relevant.
A quantum machine for efficient light-energy harvesting (from the paper)
The tumour spheroid — a simple multicellular mimic of a tumour, this system is amenable to direct laboratory study and is known to show many of the hallmarks of cancer, including the lack of growth-regulation mechanisms, which drives it to grow avidly.
A study showing effects on size, shape and growth rate of tumours (from the paper)